Apr 21 10:11:21.991879 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:11:21.991898 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:11:21.991907 kernel: BIOS-provided physical RAM map:
Apr 21 10:11:21.991912 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:11:21.991916 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 21 10:11:21.991921 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 21 10:11:21.991926 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 21 10:11:21.991930 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 21 10:11:21.991935 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 21 10:11:21.991939 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 21 10:11:21.991944 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 21 10:11:21.991951 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 21 10:11:21.991955 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 21 10:11:21.991960 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 21 10:11:21.991965 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 10:11:21.991970 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:11:21.991977 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 10:11:21.991981 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 21 10:11:21.991986 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:11:21.991990 kernel: NX (Execute Disable) protection: active
Apr 21 10:11:21.991995 kernel: APIC: Static calls initialized
Apr 21 10:11:21.992000 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 21 10:11:21.992005 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198
Apr 21 10:11:21.992009 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 21 10:11:21.992014 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 21 10:11:21.992019 kernel: SMBIOS 3.0.0 present.
Apr 21 10:11:21.992024 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 21 10:11:21.992028 kernel: Hypervisor detected: KVM
Apr 21 10:11:21.992035 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:11:21.992040 kernel: kvm-clock: using sched offset of 12757477216 cycles
Apr 21 10:11:21.992044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:11:21.992049 kernel: tsc: Detected 2396.398 MHz processor
Apr 21 10:11:21.992054 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:11:21.992059 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:11:21.992064 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 21 10:11:21.992069 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:11:21.992073 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:11:21.992081 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 21 10:11:21.992085 kernel: Using GB pages for direct mapping
Apr 21 10:11:21.992090 kernel: Secure boot disabled
Apr 21 10:11:21.992098 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:11:21.992103 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 21 10:11:21.992108 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:11:21.992113 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:21.992120 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:21.992125 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 21 10:11:21.992130 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:21.992135 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:21.992140 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:21.992145 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:11:21.992150 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:11:21.992157 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 21 10:11:21.992162 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 21 10:11:21.992167 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 21 10:11:21.992172 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 21 10:11:21.992177 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 21 10:11:21.992182 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 21 10:11:21.992187 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 21 10:11:21.992224 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 21 10:11:21.992230 kernel: No NUMA configuration found
Apr 21 10:11:21.992237 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 21 10:11:21.992243 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Apr 21 10:11:21.992248 kernel: Zone ranges:
Apr 21 10:11:21.992253 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:11:21.992258 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:11:21.992263 kernel:   Normal   [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:11:21.992268 kernel: Movable zone start for each node
Apr 21 10:11:21.992273 kernel: Early memory node ranges
Apr 21 10:11:21.992278 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:11:21.992283 kernel:   node   0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 21 10:11:21.992290 kernel:   node   0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 21 10:11:21.992295 kernel:   node   0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 21 10:11:21.992300 kernel:   node   0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:11:21.992305 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 21 10:11:21.992310 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:11:21.992315 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:11:21.992320 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 21 10:11:21.992325 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 21 10:11:21.992329 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 21 10:11:21.992337 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 21 10:11:21.992342 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:11:21.992347 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:11:21.992352 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:11:21.992357 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:11:21.992362 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:11:21.992366 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:11:21.992371 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:11:21.992376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:11:21.992384 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:11:21.992389 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:11:21.992394 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:11:21.992399 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:11:21.992404 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 21 10:11:21.992409 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:11:21.992414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:11:21.992419 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:11:21.992424 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:11:21.992431 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:11:21.992436 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:11:21.992441 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 21 10:11:21.992447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:11:21.992452 kernel: random: crng init done
Apr 21 10:11:21.992457 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:11:21.992462 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:11:21.992467 kernel: Fallback order for Node 0: 0
Apr 21 10:11:21.992474 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 21 10:11:21.992479 kernel: Policy zone: Normal
Apr 21 10:11:21.992484 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:11:21.992489 kernel: software IO TLB: area num 2.
Apr 21 10:11:21.992494 kernel: Memory: 3819396K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 271568K reserved, 0K cma-reserved)
Apr 21 10:11:21.992499 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:11:21.992511 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:11:21.992530 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:11:21.992540 kernel: Dynamic Preempt: voluntary
Apr 21 10:11:21.992548 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:11:21.992554 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:11:21.992559 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:11:21.992564 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:11:21.992577 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:11:21.992585 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:11:21.992590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:11:21.992600 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:11:21.992605 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:11:21.992610 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:11:21.992615 kernel: Console: colour dummy device 80x25
Apr 21 10:11:21.992621 kernel: printk: console [tty0] enabled
Apr 21 10:11:21.992629 kernel: printk: console [ttyS0] enabled
Apr 21 10:11:21.992634 kernel: ACPI: Core revision 20230628
Apr 21 10:11:21.992639 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:11:21.992645 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:11:21.992650 kernel: x2apic enabled
Apr 21 10:11:21.992657 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:11:21.992663 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:11:21.992668 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:11:21.992673 kernel: Calibrating delay loop (skipped) preset value.. 4792.79 BogoMIPS (lpj=2396398)
Apr 21 10:11:21.992678 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:11:21.992684 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:11:21.992689 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:11:21.992694 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:11:21.992700 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 21 10:11:21.992708 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:11:21.992713 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:11:21.992718 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:11:21.992725 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 21 10:11:21.992731 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:11:21.992736 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:11:21.992741 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:11:21.992747 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:11:21.992752 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:11:21.992759 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:11:21.992765 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:11:21.992770 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:11:21.992775 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:11:21.992780 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:11:21.992786 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:11:21.992791 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:11:21.992796 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:11:21.992801 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 21 10:11:21.992809 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 21 10:11:21.992814 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:11:21.992819 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:11:21.992825 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:11:21.992830 kernel: landlock: Up and running.
Apr 21 10:11:21.992835 kernel: SELinux: Initializing.
Apr 21 10:11:21.992841 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:11:21.992846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:11:21.992851 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 21 10:11:21.992859 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:11:21.992864 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:11:21.992869 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:11:21.992875 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:11:21.992880 kernel: ... version:                0
Apr 21 10:11:21.992885 kernel: ... bit width:              48
Apr 21 10:11:21.992890 kernel: ... generic registers:      6
Apr 21 10:11:21.992895 kernel: ... value mask:             0000ffffffffffff
Apr 21 10:11:21.992903 kernel: ... max period:             00007fffffffffff
Apr 21 10:11:21.992908 kernel: ... fixed-purpose events:   0
Apr 21 10:11:21.992913 kernel: ... event mask:             000000000000003f
Apr 21 10:11:21.992919 kernel: signal: max sigframe size: 3376
Apr 21 10:11:21.992924 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:11:21.992929 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:11:21.992934 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:11:21.992940 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:11:21.992945 kernel: .... node #0, CPUs: #1
Apr 21 10:11:21.992950 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:11:21.992958 kernel: smpboot: Max logical packages: 1
Apr 21 10:11:21.992963 kernel: smpboot: Total of 2 processors activated (9585.59 BogoMIPS)
Apr 21 10:11:21.992969 kernel: devtmpfs: initialized
Apr 21 10:11:21.992974 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:11:21.992979 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 21 10:11:21.992984 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:11:21.992990 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:11:21.992995 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:11:21.993000 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:11:21.993008 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:11:21.993013 kernel: audit: type=2000 audit(1776766281.263:1): state=initialized audit_enabled=0 res=1
Apr 21 10:11:21.993018 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:11:21.993023 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:11:21.993029 kernel: cpuidle: using governor menu
Apr 21 10:11:21.993034 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:11:21.993039 kernel: dca service started, version 1.12.1
Apr 21 10:11:21.993045 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 21 10:11:21.993050 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:11:21.993058 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:11:21.993063 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:11:21.993068 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:11:21.993073 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:11:21.993078 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:11:21.993084 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:11:21.993089 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:11:21.993094 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:11:21.993099 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:11:21.993107 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:11:21.993112 kernel: ACPI: Interpreter enabled
Apr 21 10:11:21.993117 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:11:21.993122 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:11:21.993128 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:11:21.993133 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:11:21.993138 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:11:21.993143 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:11:21.993403 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:11:21.993518 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:11:21.993619 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:11:21.993626 kernel: PCI host bridge to bus 0000:00
Apr 21 10:11:21.993728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:11:21.993819 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:11:21.993907 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:11:21.994017 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 21 10:11:21.994111 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 21 10:11:21.994222 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 21 10:11:21.994312 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:11:21.994425 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:11:21.994534 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 21 10:11:21.994636 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 21 10:11:21.994732 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 21 10:11:21.994829 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 21 10:11:21.994932 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:11:21.995039 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:11:21.995138 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:11:21.995714 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.995827 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 21 10:11:21.995934 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.996032 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 21 10:11:21.996139 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.996275 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 21 10:11:21.996383 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.996484 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 21 10:11:21.996587 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.996684 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 21 10:11:21.996787 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.996885 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 21 10:11:21.996988 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.997086 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 21 10:11:21.997208 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.997308 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 21 10:11:21.997411 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:11:21.997507 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 21 10:11:21.997608 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:11:21.997704 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:11:21.997813 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:11:21.997909 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 21 10:11:21.998022 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 21 10:11:21.998137 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:11:21.998323 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 21 10:11:21.998434 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:11:21.998537 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 21 10:11:21.998637 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 21 10:11:21.998737 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:11:21.998846 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:11:21.998955 kernel: pci 0000:00:02.0:   bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:11:21.999052 kernel: pci 0000:00:02.0:   bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:11:21.999644 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 21 10:11:21.999785 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 21 10:11:21.999888 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:11:21.999985 kernel: pci 0000:00:02.1:   bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:11:22.000097 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 21 10:11:22.000221 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 21 10:11:22.000324 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 21 10:11:22.000427 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:11:22.000522 kernel: pci 0000:00:02.2:   bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:11:22.000620 kernel: pci 0000:00:02.2:   bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:11:22.000729 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 21 10:11:22.000830 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 21 10:11:22.000927 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:11:22.001023 kernel: pci 0000:00:02.3:   bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:11:22.001132 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 21 10:11:22.001490 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 21 10:11:22.001598 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 21 10:11:22.001700 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:11:22.001829 kernel: pci 0000:00:02.4:   bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:11:22.001929 kernel: pci 0000:00:02.4:   bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:11:22.004278 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 21 10:11:22.004396 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 21 10:11:22.004503 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 21 10:11:22.004603 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:11:22.004699 kernel: pci 0000:00:02.5:   bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:11:22.004794 kernel: pci 0000:00:02.5:   bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:11:22.004801 kernel: acpiphp: Slot [0] registered
Apr 21 10:11:22.004911 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:11:22.005044 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 21 10:11:22.005183 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 21 10:11:22.006081 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:11:22.006189 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:11:22.006393 kernel: pci 0000:00:02.6:   bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:11:22.006489 kernel: pci 0000:00:02.6:   bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:11:22.006496 kernel: acpiphp: Slot [0-2] registered
Apr 21 10:11:22.006609 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:11:22.006707 kernel: pci 0000:00:02.7:   bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:11:22.006806 kernel: pci 0000:00:02.7:   bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:11:22.006813 kernel: acpiphp: Slot [0-3] registered
Apr 21 10:11:22.006910 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:11:22.007013 kernel: pci 0000:00:03.0:   bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:11:22.007110 kernel: pci 0000:00:03.0:   bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:11:22.007116 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:11:22.007122 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:11:22.007127 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:11:22.007133 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:11:22.007141 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:11:22.007147 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:11:22.007152 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:11:22.007158 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:11:22.007163 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:11:22.007169 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:11:22.007175 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:11:22.007180 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:11:22.007186 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:11:22.007215 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:11:22.007221 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:11:22.007226 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:11:22.007232 kernel: iommu: Default domain type: Translated
Apr 21 10:11:22.007238 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:11:22.007243 kernel: efivars: Registered efivars operations
Apr 21 10:11:22.007248 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:11:22.007254 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:11:22.007260 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 21 10:11:22.007267 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 21 10:11:22.007274 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 21 10:11:22.007280 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 21 10:11:22.007385 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:11:22.007480 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:11:22.007575 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:11:22.007582 kernel: vgaarb: loaded
Apr 21 10:11:22.007587 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:11:22.007593 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:11:22.007601 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:11:22.007607 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:11:22.007613 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:11:22.007618 kernel: pnp: PnP ACPI init
Apr 21 10:11:22.007728 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 21 10:11:22.007736 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:11:22.007741 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:11:22.007747 kernel: NET: Registered PF_INET protocol family
Apr 21 10:11:22.007768 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:11:22.007776 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:11:22.007781 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:11:22.007787 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:11:22.007793 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:11:22.007799 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:11:22.007805 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:11:22.007811 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:11:22.007818 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:11:22.007824 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:11:22.007929 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:11:22.008030 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:11:22.008148 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 21 10:11:22.009902 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 21 10:11:22.010018 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 21 10:11:22.010119 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 21 10:11:22.010276 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 21 10:11:22.010377 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 21 10:11:22.010481 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 21 10:11:22.010581 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:11:22.010716 kernel: pci 0000:00:02.0:   bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:11:22.010848 kernel: pci 0000:00:02.0:   bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:11:22.010954 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:11:22.011060 kernel: pci 0000:00:02.1:   bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:11:22.011160 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:11:22.013327 kernel: pci 0000:00:02.2:   bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:11:22.013437 kernel: pci 0000:00:02.2:   bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:11:22.013538 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:11:22.013634 kernel: pci 0000:00:02.3:   bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:11:22.013736 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:11:22.013833 kernel: pci 0000:00:02.4:   bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:11:22.013928 kernel: pci 0000:00:02.4:   bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:11:22.014038 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:11:22.014137 kernel: pci 0000:00:02.5:   bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:11:22.014254 kernel: pci 0000:00:02.5:   bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:11:22.014357 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 21 10:11:22.014458 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:11:22.014552 kernel: pci 0000:00:02.6:   bridge window [io 0x1000-0x1fff]
Apr 21 10:11:22.014647 kernel: pci 0000:00:02.6:   bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:11:22.014741 kernel: pci 0000:00:02.6:   bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:11:22.014838 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:11:22.014934 kernel: pci 0000:00:02.7:   bridge window [io 0x2000-0x2fff]
Apr 21 10:11:22.015039 kernel: pci 0000:00:02.7:   bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:11:22.015151 kernel: pci 0000:00:02.7:   bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:11:22.017697 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:11:22.017842 kernel: pci 0000:00:03.0:   bridge window [io 0x3000-0x3fff]
Apr 21 10:11:22.017977 kernel: pci 0000:00:03.0:   bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:11:22.018106 kernel: pci 0000:00:03.0:   bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:11:22.018788 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 
21 10:11:22.018903 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 21 10:11:22.019028 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 21 10:11:22.019132 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 21 10:11:22.019267 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 21 10:11:22.019392 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 21 10:11:22.019709 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 21 10:11:22.020975 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 21 10:11:22.021143 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 21 10:11:22.021351 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 21 10:11:22.021494 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 21 10:11:22.021647 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 21 10:11:22.021793 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 21 10:11:22.021932 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 21 10:11:22.022085 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 21 10:11:22.022264 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 21 10:11:22.022423 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 21 10:11:22.022564 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 21 10:11:22.022706 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 21 10:11:22.022856 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 21 10:11:22.022998 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 21 10:11:22.023152 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 21 10:11:22.023336 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 21 10:11:22.023484 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 21 10:11:22.023626 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 21 10:11:22.023641 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 21 10:11:22.023651 kernel: PCI: CLS 0 bytes, default 64 Apr 21 10:11:22.023660 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 21 10:11:22.023670 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 21 10:11:22.023683 kernel: Initialise system trusted keyrings Apr 21 10:11:22.023693 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 10:11:22.023702 kernel: Key type asymmetric registered Apr 21 10:11:22.023710 kernel: Asymmetric key parser 'x509' registered Apr 21 10:11:22.023719 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 10:11:22.023728 kernel: io scheduler mq-deadline registered Apr 21 10:11:22.023737 kernel: io scheduler kyber registered Apr 21 10:11:22.023746 kernel: io scheduler bfq registered Apr 21 10:11:22.023897 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 21 10:11:22.024074 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 21 10:11:22.024298 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 21 10:11:22.024453 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 21 10:11:22.024601 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 21 10:11:22.024748 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 21 10:11:22.024896 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 21 10:11:22.025044 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 21 10:11:22.025218 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 21 10:11:22.025371 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 21 10:11:22.025531 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 21 
10:11:22.025681 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 21 10:11:22.025832 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 21 10:11:22.025981 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 21 10:11:22.026132 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 21 10:11:22.026328 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 21 10:11:22.026344 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 21 10:11:22.026489 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 21 10:11:22.026642 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 21 10:11:22.026657 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 10:11:22.026667 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 21 10:11:22.026676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 10:11:22.026685 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 10:11:22.026695 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 21 10:11:22.026704 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 21 10:11:22.026713 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 21 10:11:22.026865 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 21 10:11:22.026886 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 21 10:11:22.027024 kernel: rtc_cmos 00:03: registered as rtc0 Apr 21 10:11:22.027164 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:11:21 UTC (1776766281) Apr 21 10:11:22.027470 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 21 10:11:22.027488 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 21 10:11:22.027498 kernel: efifb: probing for efifb Apr 21 10:11:22.027507 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 21 10:11:22.027522 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 21 
10:11:22.027531 kernel: efifb: scrolling: redraw Apr 21 10:11:22.027540 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 21 10:11:22.027549 kernel: Console: switching to colour frame buffer device 160x50 Apr 21 10:11:22.027558 kernel: fb0: EFI VGA frame buffer device Apr 21 10:11:22.027568 kernel: pstore: Using crash dump compression: deflate Apr 21 10:11:22.027576 kernel: pstore: Registered efi_pstore as persistent store backend Apr 21 10:11:22.027585 kernel: NET: Registered PF_INET6 protocol family Apr 21 10:11:22.027594 kernel: Segment Routing with IPv6 Apr 21 10:11:22.027606 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 10:11:22.027616 kernel: NET: Registered PF_PACKET protocol family Apr 21 10:11:22.027625 kernel: Key type dns_resolver registered Apr 21 10:11:22.027634 kernel: IPI shorthand broadcast: enabled Apr 21 10:11:22.027644 kernel: sched_clock: Marking stable (1388011012, 217061656)->(1661439281, -56366613) Apr 21 10:11:22.027653 kernel: registered taskstats version 1 Apr 21 10:11:22.027662 kernel: Loading compiled-in X.509 certificates Apr 21 10:11:22.027671 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 10:11:22.027680 kernel: Key type .fscrypt registered Apr 21 10:11:22.027689 kernel: Key type fscrypt-provisioning registered Apr 21 10:11:22.027702 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 21 10:11:22.027711 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:11:22.027720 kernel: ima: No architecture policies found Apr 21 10:11:22.027728 kernel: clk: Disabling unused clocks Apr 21 10:11:22.027738 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:11:22.027747 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:11:22.027756 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:11:22.027765 kernel: Run /init as init process Apr 21 10:11:22.027776 kernel: with arguments: Apr 21 10:11:22.027789 kernel: /init Apr 21 10:11:22.027798 kernel: with environment: Apr 21 10:11:22.027806 kernel: HOME=/ Apr 21 10:11:22.027815 kernel: TERM=linux Apr 21 10:11:22.027827 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:11:22.027839 systemd[1]: Detected virtualization kvm. Apr 21 10:11:22.027852 systemd[1]: Detected architecture x86-64. Apr 21 10:11:22.027861 systemd[1]: Running in initrd. Apr 21 10:11:22.027871 systemd[1]: No hostname configured, using default hostname. Apr 21 10:11:22.027880 systemd[1]: Hostname set to . Apr 21 10:11:22.027890 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:11:22.027899 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:11:22.027908 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:11:22.027918 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:11:22.027928 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 21 10:11:22.027941 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:11:22.027951 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:11:22.027960 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:11:22.027971 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:11:22.027980 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:11:22.027990 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:11:22.028003 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:11:22.028012 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:11:22.028021 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:11:22.028031 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:11:22.028041 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:11:22.028050 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:11:22.028059 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:11:22.028068 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:11:22.028078 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:11:22.028090 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:11:22.028100 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:11:22.028109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:11:22.028118 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:11:22.028128 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:11:22.028138 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:11:22.028147 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:11:22.028156 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:11:22.028165 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:11:22.028178 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:11:22.028241 systemd-journald[189]: Collecting audit messages is disabled. Apr 21 10:11:22.028282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:22.028296 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:11:22.028306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:11:22.028315 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:11:22.028325 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:11:22.028336 systemd-journald[189]: Journal started Apr 21 10:11:22.028359 systemd-journald[189]: Runtime Journal (/run/log/journal/ae597c60f3954dd48ea1d53b795c0f47) is 8.0M, max 76.3M, 68.3M free. Apr 21 10:11:22.028417 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:11:21.991678 systemd-modules-load[190]: Inserted module 'overlay' Apr 21 10:11:22.034861 kernel: Bridge firewalling registered Apr 21 10:11:22.033426 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 21 10:11:22.038254 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:11:22.038891 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Apr 21 10:11:22.039531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:22.040414 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:11:22.046305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:11:22.048346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:11:22.051408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:11:22.055522 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:11:22.068641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:11:22.069608 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:11:22.076192 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:11:22.088355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:11:22.088949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:11:22.094412 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:11:22.104538 dracut-cmdline[226]: dracut-dracut-053 Apr 21 10:11:22.107465 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:11:22.115456 systemd-resolved[224]: Positive Trust Anchors: Apr 21 10:11:22.115472 systemd-resolved[224]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:11:22.115499 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:11:22.118127 systemd-resolved[224]: Defaulting to hostname 'linux'. Apr 21 10:11:22.119136 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:11:22.119675 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:11:22.188256 kernel: SCSI subsystem initialized Apr 21 10:11:22.198237 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:11:22.208229 kernel: iscsi: registered transport (tcp) Apr 21 10:11:22.224545 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:11:22.224597 kernel: QLogic iSCSI HBA Driver Apr 21 10:11:22.283439 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:11:22.292433 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:11:22.318417 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:11:22.318489 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:11:22.322168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:11:22.369235 kernel: raid6: avx512x4 gen() 32874 MB/s Apr 21 10:11:22.387227 kernel: raid6: avx512x2 gen() 45244 MB/s Apr 21 10:11:22.405224 kernel: raid6: avx512x1 gen() 42418 MB/s Apr 21 10:11:22.423235 kernel: raid6: avx2x4 gen() 44478 MB/s Apr 21 10:11:22.441253 kernel: raid6: avx2x2 gen() 48366 MB/s Apr 21 10:11:22.460290 kernel: raid6: avx2x1 gen() 37529 MB/s Apr 21 10:11:22.460342 kernel: raid6: using algorithm avx2x2 gen() 48366 MB/s Apr 21 10:11:22.480412 kernel: raid6: .... xor() 36166 MB/s, rmw enabled Apr 21 10:11:22.480471 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:11:22.497266 kernel: xor: automatically using best checksumming function avx Apr 21 10:11:22.610270 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:11:22.628095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:11:22.637484 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:11:22.647796 systemd-udevd[408]: Using default interface naming scheme 'v255'. Apr 21 10:11:22.651776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:11:22.658425 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:11:22.681010 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Apr 21 10:11:22.721375 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:11:22.728422 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:11:22.800060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:11:22.809530 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:11:22.819577 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:11:22.822459 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:11:22.823393 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:11:22.824673 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:11:22.833318 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:11:22.848576 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:11:22.894220 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:11:22.901236 kernel: libata version 3.00 loaded. Apr 21 10:11:22.913222 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:11:22.916244 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:11:22.922780 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:11:22.922938 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:11:22.925592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:11:22.940775 kernel: ACPI: bus type USB registered Apr 21 10:11:22.940787 kernel: usbcore: registered new interface driver usbfs Apr 21 10:11:22.940796 kernel: usbcore: registered new interface driver hub Apr 21 10:11:22.940804 kernel: usbcore: registered new device driver usb Apr 21 10:11:22.940812 kernel: scsi host1: ahci Apr 21 10:11:22.940953 kernel: scsi host0: Virtio SCSI HBA Apr 21 10:11:22.940973 kernel: scsi host2: ahci Apr 21 10:11:22.925728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:11:22.926180 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:11:22.926510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:11:22.926622 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 21 10:11:22.926947 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:22.947228 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 10:11:22.950257 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:11:22.950473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:22.955523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:11:22.955610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:22.965598 kernel: AES CTR mode by8 optimization enabled Apr 21 10:11:22.964400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:22.975236 kernel: scsi host3: ahci Apr 21 10:11:22.985465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:22.989078 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 10:11:22.989308 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 21 10:11:22.996455 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 21 10:11:22.996726 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 21 10:11:23.004901 kernel: scsi host4: ahci Apr 21 10:11:23.011696 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 10:11:23.011850 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 21 10:11:23.011993 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 21 10:11:23.012109 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 21 10:11:23.012254 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 21 10:11:23.015321 kernel: hub 1-0:1.0: USB hub found Apr 21 10:11:23.019575 kernel: scsi host5: ahci Apr 21 10:11:23.019703 kernel: hub 1-0:1.0: 4 ports detected Apr 21 10:11:23.019822 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:11:23.019956 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 21 10:11:23.030396 kernel: scsi host6: ahci Apr 21 10:11:23.030601 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 42 Apr 21 10:11:23.030618 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 21 10:11:23.030754 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 42 Apr 21 10:11:23.030762 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 42 Apr 21 10:11:23.030770 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 42 Apr 21 10:11:23.032945 kernel: hub 2-0:1.0: USB hub found Apr 21 10:11:23.033115 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 42 Apr 21 10:11:23.033124 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 42 Apr 21 10:11:23.035236 kernel: hub 2-0:1.0: 4 ports detected Apr 21 10:11:23.042231 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 10:11:23.048871 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Apr 21 10:11:23.048902 kernel: GPT:17805311 != 160006143 Apr 21 10:11:23.048911 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:11:23.050675 kernel: GPT:17805311 != 160006143 Apr 21 10:11:23.053431 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:11:23.053454 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:11:23.058233 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:11:23.059508 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:11:23.268278 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 21 10:11:23.356480 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:23.356582 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:23.357258 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:23.362280 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:23.367266 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 21 10:11:23.380809 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:11:23.380869 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:11:23.380892 kernel: ata1.00: applying bridge limits Apr 21 10:11:23.388371 kernel: ata1.00: configured for UDMA/100 Apr 21 10:11:23.395337 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:11:23.452472 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 10:11:23.464347 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:11:23.464563 kernel: usbcore: registered new interface driver usbhid Apr 21 10:11:23.464580 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:11:23.464588 kernel: usbhid: USB HID core driver Apr 21 10:11:23.484739 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (472) Apr 21 10:11:23.493415 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - 
QEMU_HARDDISK ROOT. Apr 21 10:11:23.495308 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 21 10:11:23.502219 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 21 10:11:23.502415 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (477) Apr 21 10:11:23.507892 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 10:11:23.509271 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:11:23.513637 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 21 10:11:23.514583 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 10:11:23.521584 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:11:23.526450 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:11:23.531081 disk-uuid[585]: Primary Header is updated. Apr 21 10:11:23.531081 disk-uuid[585]: Secondary Entries is updated. Apr 21 10:11:23.531081 disk-uuid[585]: Secondary Header is updated. Apr 21 10:11:24.546594 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:11:24.551296 disk-uuid[587]: The operation has completed successfully. Apr 21 10:11:24.616411 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:11:24.616508 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:11:24.624310 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:11:24.627390 sh[599]: Success Apr 21 10:11:24.639216 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:11:24.677924 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Apr 21 10:11:24.685266 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:11:24.685741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 21 10:11:24.717963 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:11:24.718011 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:11:24.718020 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:11:24.721237 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:11:24.723549 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:11:24.734228 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:11:24.737512 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:11:24.739472 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:11:24.744448 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:11:24.748596 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:11:24.768500 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:11:24.768543 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:11:24.775860 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:11:24.784766 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:11:24.784794 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:11:24.799718 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:11:24.799420 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:11:24.806671 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 21 10:11:24.810357 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 21 10:11:24.882574 ignition[703]: Ignition 2.19.0 Apr 21 10:11:24.882587 ignition[703]: Stage: fetch-offline Apr 21 10:11:24.882622 ignition[703]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:24.882631 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:24.884813 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:11:24.882719 ignition[703]: parsed url from cmdline: "" Apr 21 10:11:24.882723 ignition[703]: no config URL provided Apr 21 10:11:24.882728 ignition[703]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:11:24.882735 ignition[703]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:11:24.882740 ignition[703]: failed to fetch config: resource requires networking Apr 21 10:11:24.883052 ignition[703]: Ignition finished successfully Apr 21 10:11:24.887397 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:11:24.894380 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:11:24.911403 systemd-networkd[786]: lo: Link UP Apr 21 10:11:24.911413 systemd-networkd[786]: lo: Gained carrier Apr 21 10:11:24.913776 systemd-networkd[786]: Enumeration completed Apr 21 10:11:24.913964 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:11:24.914420 systemd[1]: Reached target network.target - Network. Apr 21 10:11:24.914577 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:11:24.914581 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:11:24.915373 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:11:24.915377 systemd-networkd[786]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:11:24.915977 systemd-networkd[786]: eth0: Link UP Apr 21 10:11:24.915981 systemd-networkd[786]: eth0: Gained carrier Apr 21 10:11:24.915988 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:11:24.918407 systemd-networkd[786]: eth1: Link UP Apr 21 10:11:24.918411 systemd-networkd[786]: eth1: Gained carrier Apr 21 10:11:24.918418 systemd-networkd[786]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:11:24.921305 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 21 10:11:24.932209 ignition[790]: Ignition 2.19.0 Apr 21 10:11:24.932218 ignition[790]: Stage: fetch Apr 21 10:11:24.932350 ignition[790]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:24.932360 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:24.932439 ignition[790]: parsed url from cmdline: "" Apr 21 10:11:24.932443 ignition[790]: no config URL provided Apr 21 10:11:24.932447 ignition[790]: reading system config file "/usr/lib/ignition/user.ign" Apr 21 10:11:24.932455 ignition[790]: no config at "/usr/lib/ignition/user.ign" Apr 21 10:11:24.932470 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 21 10:11:24.932603 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 21 10:11:24.949250 systemd-networkd[786]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 21 10:11:24.973244 systemd-networkd[786]: eth0: DHCPv4 address 37.27.23.25/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 21 10:11:25.132892 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 21 10:11:25.142064 
ignition[790]: GET result: OK Apr 21 10:11:25.142257 ignition[790]: parsing config with SHA512: c01f55ef150d602a26898fc8ec8686af6b55948a6272041a10b3d35d0fd72ff41f7d117194ff8d9f5086ad5fbd17cbcad8f34e397676962e9209b5f1264756f2 Apr 21 10:11:25.148795 unknown[790]: fetched base config from "system" Apr 21 10:11:25.148821 unknown[790]: fetched base config from "system" Apr 21 10:11:25.149516 ignition[790]: fetch: fetch complete Apr 21 10:11:25.148836 unknown[790]: fetched user config from "hetzner" Apr 21 10:11:25.149532 ignition[790]: fetch: fetch passed Apr 21 10:11:25.149615 ignition[790]: Ignition finished successfully Apr 21 10:11:25.155062 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 21 10:11:25.163452 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 21 10:11:25.199234 ignition[797]: Ignition 2.19.0 Apr 21 10:11:25.199256 ignition[797]: Stage: kargs Apr 21 10:11:25.199531 ignition[797]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:25.199553 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:25.202936 ignition[797]: kargs: kargs passed Apr 21 10:11:25.205723 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 21 10:11:25.203020 ignition[797]: Ignition finished successfully Apr 21 10:11:25.213463 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 21 10:11:25.249400 ignition[804]: Ignition 2.19.0 Apr 21 10:11:25.249426 ignition[804]: Stage: disks Apr 21 10:11:25.249686 ignition[804]: no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:25.249707 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:25.251024 ignition[804]: disks: disks passed Apr 21 10:11:25.253523 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 21 10:11:25.251108 ignition[804]: Ignition finished successfully Apr 21 10:11:25.255650 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Apr 21 10:11:25.257079 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 21 10:11:25.258632 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:11:25.260124 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:11:25.261623 systemd[1]: Reached target basic.target - Basic System. Apr 21 10:11:25.270436 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 21 10:11:25.296776 systemd-fsck[813]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 21 10:11:25.304347 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 21 10:11:25.311652 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 21 10:11:25.403227 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none. Apr 21 10:11:25.404000 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 21 10:11:25.404851 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 21 10:11:25.410260 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:11:25.413277 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 21 10:11:25.414919 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 21 10:11:25.419059 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 21 10:11:25.419273 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:11:25.428499 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 21 10:11:25.439341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 21 10:11:25.453301 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (821) Apr 21 10:11:25.453320 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:11:25.453329 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:11:25.453343 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:11:25.473356 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:11:25.473422 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:11:25.476280 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 21 10:11:25.483675 coreos-metadata[823]: Apr 21 10:11:25.483 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 21 10:11:25.484938 coreos-metadata[823]: Apr 21 10:11:25.484 INFO Fetch successful Apr 21 10:11:25.486277 coreos-metadata[823]: Apr 21 10:11:25.486 INFO wrote hostname ci-4081-3-7-7-16e5f88171 to /sysroot/etc/hostname Apr 21 10:11:25.487866 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 21 10:11:25.490127 initrd-setup-root[849]: cut: /sysroot/etc/passwd: No such file or directory Apr 21 10:11:25.494871 initrd-setup-root[856]: cut: /sysroot/etc/group: No such file or directory Apr 21 10:11:25.499102 initrd-setup-root[863]: cut: /sysroot/etc/shadow: No such file or directory Apr 21 10:11:25.502337 initrd-setup-root[870]: cut: /sysroot/etc/gshadow: No such file or directory Apr 21 10:11:25.580456 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 21 10:11:25.585292 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 21 10:11:25.590218 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 21 10:11:25.597239 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:11:25.614375 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 21 10:11:25.616004 ignition[941]: INFO : Ignition 2.19.0 Apr 21 10:11:25.617952 ignition[941]: INFO : Stage: mount Apr 21 10:11:25.617952 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:25.617952 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:25.617952 ignition[941]: INFO : mount: mount passed Apr 21 10:11:25.617952 ignition[941]: INFO : Ignition finished successfully Apr 21 10:11:25.618574 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 21 10:11:25.624290 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 21 10:11:25.713792 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 21 10:11:25.718300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 21 10:11:25.731237 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (952) Apr 21 10:11:25.731274 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:11:25.735021 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:11:25.735052 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:11:25.749823 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:11:25.749910 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:11:25.754225 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 21 10:11:25.783640 ignition[968]: INFO : Ignition 2.19.0 Apr 21 10:11:25.783640 ignition[968]: INFO : Stage: files Apr 21 10:11:25.786215 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:25.786215 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:25.786215 ignition[968]: DEBUG : files: compiled without relabeling support, skipping Apr 21 10:11:25.786215 ignition[968]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 21 10:11:25.786215 ignition[968]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 21 10:11:25.790714 ignition[968]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 21 10:11:25.790714 ignition[968]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 21 10:11:25.790714 ignition[968]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 21 10:11:25.790714 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:11:25.790714 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 21 10:11:25.788784 unknown[968]: wrote ssh authorized keys file for user: core Apr 21 10:11:26.066494 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 21 10:11:26.372427 systemd-networkd[786]: eth0: Gained IPv6LL Apr 21 10:11:26.496998 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 21 10:11:26.496998 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:11:26.500514 ignition[968]: INFO : files: createFilesystemsFiles: 
createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 21 10:11:26.629000 systemd-networkd[786]: eth1: Gained IPv6LL Apr 21 10:11:26.894329 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 21 10:11:27.192234 ignition[968]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 21 10:11:27.192234 ignition[968]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 21 10:11:27.196773 ignition[968]: INFO : files: createResultFile: createFiles: op(10): 
[started] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:11:27.196773 ignition[968]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 21 10:11:27.196773 ignition[968]: INFO : files: files passed Apr 21 10:11:27.196773 ignition[968]: INFO : Ignition finished successfully Apr 21 10:11:27.196918 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 21 10:11:27.206362 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 21 10:11:27.210307 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 21 10:11:27.219627 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 21 10:11:27.220523 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 21 10:11:27.223922 initrd-setup-root-after-ignition[1001]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:11:27.225011 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:11:27.225011 initrd-setup-root-after-ignition[997]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 21 10:11:27.228136 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:11:27.230420 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 21 10:11:27.236327 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 21 10:11:27.256747 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 21 10:11:27.256956 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 21 10:11:27.258540 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 21 10:11:27.259779 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Apr 21 10:11:27.260255 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 21 10:11:27.265332 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 21 10:11:27.283293 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:11:27.287333 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 21 10:11:27.310262 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:11:27.311112 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:11:27.311934 systemd[1]: Stopped target timers.target - Timer Units. Apr 21 10:11:27.312350 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 21 10:11:27.312429 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 21 10:11:27.313542 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 21 10:11:27.314260 systemd[1]: Stopped target basic.target - Basic System. Apr 21 10:11:27.314908 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 21 10:11:27.315582 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 21 10:11:27.316286 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 21 10:11:27.316984 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 21 10:11:27.317679 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:11:27.318383 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 21 10:11:27.319055 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 21 10:11:27.319746 systemd[1]: Stopped target swap.target - Swaps. Apr 21 10:11:27.320418 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Apr 21 10:11:27.320494 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:11:27.321460 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:11:27.322130 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:11:27.322793 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 21 10:11:27.323483 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:11:27.323860 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 21 10:11:27.323929 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 21 10:11:27.324935 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 21 10:11:27.325013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 21 10:11:27.325678 systemd[1]: ignition-files.service: Deactivated successfully. Apr 21 10:11:27.325746 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 21 10:11:27.326350 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 21 10:11:27.326418 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 21 10:11:27.331591 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 21 10:11:27.331946 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 21 10:11:27.332052 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:11:27.335339 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 21 10:11:27.335697 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 21 10:11:27.335799 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:11:27.336275 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 21 10:11:27.336390 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:11:27.341880 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 21 10:11:27.341984 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 21 10:11:27.345945 ignition[1021]: INFO : Ignition 2.19.0 Apr 21 10:11:27.347420 ignition[1021]: INFO : Stage: umount Apr 21 10:11:27.347420 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 21 10:11:27.347420 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 21 10:11:27.347420 ignition[1021]: INFO : umount: umount passed Apr 21 10:11:27.347420 ignition[1021]: INFO : Ignition finished successfully Apr 21 10:11:27.350468 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 21 10:11:27.350556 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 21 10:11:27.352631 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 21 10:11:27.352706 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 21 10:11:27.353103 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 21 10:11:27.353138 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 21 10:11:27.354651 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 21 10:11:27.354692 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 21 10:11:27.355035 systemd[1]: Stopped target network.target - Network. Apr 21 10:11:27.355369 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 21 10:11:27.355408 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 21 10:11:27.355741 systemd[1]: Stopped target paths.target - Path Units. Apr 21 10:11:27.356055 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 21 10:11:27.360244 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:11:27.360887 systemd[1]: Stopped target slices.target - Slice Units. Apr 21 10:11:27.361563 systemd[1]: Stopped target sockets.target - Socket Units. Apr 21 10:11:27.362208 systemd[1]: iscsid.socket: Deactivated successfully. Apr 21 10:11:27.362245 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:11:27.363283 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 21 10:11:27.363328 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:11:27.364058 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 21 10:11:27.364098 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 21 10:11:27.364807 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 21 10:11:27.364846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 21 10:11:27.365972 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 21 10:11:27.366406 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 21 10:11:27.367875 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 21 10:11:27.368330 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 21 10:11:27.368419 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 21 10:11:27.369448 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 21 10:11:27.369516 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 21 10:11:27.371247 systemd-networkd[786]: eth0: DHCPv6 lease lost Apr 21 10:11:27.373866 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 21 10:11:27.373973 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 21 10:11:27.375286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Apr 21 10:11:27.375337 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:11:27.376261 systemd-networkd[786]: eth1: DHCPv6 lease lost Apr 21 10:11:27.377366 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 21 10:11:27.377463 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 21 10:11:27.378249 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 21 10:11:27.378306 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:11:27.382270 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 21 10:11:27.382586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 21 10:11:27.382629 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:11:27.384102 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 21 10:11:27.384146 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:11:27.385151 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 21 10:11:27.385212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 21 10:11:27.385974 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:11:27.393381 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 21 10:11:27.393495 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 21 10:11:27.401812 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 21 10:11:27.401959 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:11:27.402903 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 21 10:11:27.402941 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 21 10:11:27.403549 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Apr 21 10:11:27.403579 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:11:27.404333 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 21 10:11:27.404371 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:11:27.405611 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 21 10:11:27.405649 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 21 10:11:27.407041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:11:27.407079 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:11:27.417336 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 21 10:11:27.417844 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 21 10:11:27.417902 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:11:27.418460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:11:27.418508 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:27.423274 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 21 10:11:27.423379 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 21 10:11:27.424422 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 21 10:11:27.428290 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 21 10:11:27.435108 systemd[1]: Switching root. Apr 21 10:11:27.464883 systemd-journald[189]: Journal stopped Apr 21 10:11:28.483128 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). 
Apr 21 10:11:28.485012 kernel: SELinux: policy capability network_peer_controls=1 Apr 21 10:11:28.485028 kernel: SELinux: policy capability open_perms=1 Apr 21 10:11:28.485038 kernel: SELinux: policy capability extended_socket_class=1 Apr 21 10:11:28.485046 kernel: SELinux: policy capability always_check_network=0 Apr 21 10:11:28.485057 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 21 10:11:28.485066 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 21 10:11:28.485083 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 21 10:11:28.485094 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 21 10:11:28.485103 kernel: audit: type=1403 audit(1776766287.581:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 21 10:11:28.485113 systemd[1]: Successfully loaded SELinux policy in 44.071ms. Apr 21 10:11:28.485125 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.296ms. Apr 21 10:11:28.485138 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:11:28.485150 systemd[1]: Detected virtualization kvm. Apr 21 10:11:28.485169 systemd[1]: Detected architecture x86-64. Apr 21 10:11:28.485182 systemd[1]: Detected first boot. Apr 21 10:11:28.485191 systemd[1]: Hostname set to . Apr 21 10:11:28.485212 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:11:28.485221 zram_generator::config[1064]: No configuration found. Apr 21 10:11:28.485233 systemd[1]: Populated /etc with preset unit settings. Apr 21 10:11:28.485242 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 21 10:11:28.485251 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Apr 21 10:11:28.485260 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 21 10:11:28.485270 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 21 10:11:28.485282 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 21 10:11:28.485291 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 21 10:11:28.485299 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 21 10:11:28.485310 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 21 10:11:28.485321 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 21 10:11:28.485329 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 21 10:11:28.485341 systemd[1]: Created slice user.slice - User and Session Slice. Apr 21 10:11:28.485350 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:11:28.485360 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:11:28.485369 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 21 10:11:28.485378 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 21 10:11:28.485389 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 21 10:11:28.485398 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:11:28.485407 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 21 10:11:28.485415 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:11:28.485424 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Apr 21 10:11:28.485433 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 21 10:11:28.485444 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 21 10:11:28.485453 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 21 10:11:28.485462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:11:28.485474 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:11:28.485484 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:11:28.485493 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:11:28.485501 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 21 10:11:28.485510 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 21 10:11:28.485520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:11:28.485528 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:11:28.485540 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:11:28.485549 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 21 10:11:28.485558 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 21 10:11:28.485566 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 21 10:11:28.485575 systemd[1]: Mounting media.mount - External Media Directory... Apr 21 10:11:28.485584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:28.485593 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 21 10:11:28.485601 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 21 10:11:28.485610 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Apr 21 10:11:28.485621 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 21 10:11:28.485630 systemd[1]: Reached target machines.target - Containers. Apr 21 10:11:28.485638 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 21 10:11:28.485647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:11:28.485656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:11:28.485665 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 21 10:11:28.485673 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:11:28.485682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:11:28.485692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:11:28.485702 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 21 10:11:28.485711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:11:28.485720 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 21 10:11:28.485729 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 21 10:11:28.485738 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 21 10:11:28.485747 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 21 10:11:28.485755 systemd[1]: Stopped systemd-fsck-usr.service. Apr 21 10:11:28.485766 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:11:28.485778 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 21 10:11:28.485786 kernel: ACPI: bus type drm_connector registered Apr 21 10:11:28.485795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 21 10:11:28.485804 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 21 10:11:28.485813 kernel: loop: module loaded Apr 21 10:11:28.485840 systemd-journald[1154]: Collecting audit messages is disabled. Apr 21 10:11:28.485860 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:11:28.485869 kernel: fuse: init (API version 7.39) Apr 21 10:11:28.485878 systemd[1]: verity-setup.service: Deactivated successfully. Apr 21 10:11:28.485886 systemd[1]: Stopped verity-setup.service. Apr 21 10:11:28.485895 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:28.485904 systemd-journald[1154]: Journal started Apr 21 10:11:28.485920 systemd-journald[1154]: Runtime Journal (/run/log/journal/ae597c60f3954dd48ea1d53b795c0f47) is 8.0M, max 76.3M, 68.3M free. Apr 21 10:11:28.166749 systemd[1]: Queued start job for default target multi-user.target. Apr 21 10:11:28.183550 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 21 10:11:28.184479 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 21 10:11:28.495106 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:11:28.494752 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 21 10:11:28.495326 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 21 10:11:28.495828 systemd[1]: Mounted media.mount - External Media Directory. Apr 21 10:11:28.496345 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 21 10:11:28.496826 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Apr 21 10:11:28.497340 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 21 10:11:28.497947 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 21 10:11:28.498637 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:11:28.499428 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 21 10:11:28.499615 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 21 10:11:28.500282 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:11:28.500449 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:11:28.501085 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:11:28.501351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:11:28.501978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:11:28.502136 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:11:28.502777 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 21 10:11:28.502940 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 21 10:11:28.503774 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:11:28.503938 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:11:28.504555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:11:28.505136 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 21 10:11:28.505827 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 21 10:11:28.516177 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 21 10:11:28.523280 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Apr 21 10:11:28.527942 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 21 10:11:28.528669 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 21 10:11:28.528743 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 21 10:11:28.529841 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 21 10:11:28.535323 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 21 10:11:28.538299 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 21 10:11:28.539343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:11:28.545433 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 21 10:11:28.548295 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 21 10:11:28.548808 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 10:11:28.553325 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 21 10:11:28.553695 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:11:28.559325 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:11:28.561559 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 21 10:11:28.563615 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 21 10:11:28.566850 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 21 10:11:28.570035 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Apr 21 10:11:28.571053 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 21 10:11:28.592626 systemd-journald[1154]: Time spent on flushing to /var/log/journal/ae597c60f3954dd48ea1d53b795c0f47 is 59.131ms for 1176 entries. Apr 21 10:11:28.592626 systemd-journald[1154]: System Journal (/var/log/journal/ae597c60f3954dd48ea1d53b795c0f47) is 8.0M, max 584.8M, 576.8M free. Apr 21 10:11:28.676550 systemd-journald[1154]: Received client request to flush runtime journal. Apr 21 10:11:28.676585 kernel: loop0: detected capacity change from 0 to 140768 Apr 21 10:11:28.604879 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 21 10:11:28.605363 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 21 10:11:28.608644 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 21 10:11:28.650686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:11:28.670367 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:11:28.677332 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 21 10:11:28.679431 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 21 10:11:28.687962 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 21 10:11:28.688500 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 21 10:11:28.697079 udevadm[1197]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 21 10:11:28.702525 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 21 10:11:28.704328 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Apr 21 10:11:28.713294 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:11:28.721220 kernel: loop1: detected capacity change from 0 to 217752 Apr 21 10:11:28.729181 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Apr 21 10:11:28.729940 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Apr 21 10:11:28.742873 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:11:28.771218 kernel: loop2: detected capacity change from 0 to 142488 Apr 21 10:11:28.811218 kernel: loop3: detected capacity change from 0 to 8 Apr 21 10:11:28.828245 kernel: loop4: detected capacity change from 0 to 140768 Apr 21 10:11:28.852671 kernel: loop5: detected capacity change from 0 to 217752 Apr 21 10:11:28.875561 kernel: loop6: detected capacity change from 0 to 142488 Apr 21 10:11:28.895473 kernel: loop7: detected capacity change from 0 to 8 Apr 21 10:11:28.896115 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 21 10:11:28.896950 (sd-merge)[1210]: Merged extensions into '/usr'. Apr 21 10:11:28.903299 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)... Apr 21 10:11:28.903311 systemd[1]: Reloading... Apr 21 10:11:28.979223 zram_generator::config[1235]: No configuration found. Apr 21 10:11:29.021499 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 21 10:11:29.082597 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:11:29.118145 systemd[1]: Reloading finished in 214 ms. Apr 21 10:11:29.147071 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Apr 21 10:11:29.147803 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 21 10:11:29.154315 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:11:29.154856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 21 10:11:29.158327 systemd[1]: Starting ensure-sysext.service... Apr 21 10:11:29.169394 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:11:29.175896 systemd[1]: Reloading requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)... Apr 21 10:11:29.175960 systemd[1]: Reloading... Apr 21 10:11:29.210613 systemd-udevd[1279]: Using default interface naming scheme 'v255'. Apr 21 10:11:29.216618 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 21 10:11:29.216896 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 10:11:29.217695 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 10:11:29.217899 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Apr 21 10:11:29.217960 systemd-tmpfiles[1282]: ACLs are not supported, ignoring. Apr 21 10:11:29.222144 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:11:29.222759 systemd-tmpfiles[1282]: Skipping /boot Apr 21 10:11:29.237760 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 10:11:29.238209 systemd-tmpfiles[1282]: Skipping /boot Apr 21 10:11:29.245230 zram_generator::config[1309]: No configuration found. 
Apr 21 10:11:29.376218 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1332) Apr 21 10:11:29.385093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:11:29.404252 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 21 10:11:29.437217 kernel: ACPI: button: Power Button [PWRF] Apr 21 10:11:29.443077 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 21 10:11:29.443701 systemd[1]: Reloading finished in 267 ms. Apr 21 10:11:29.456718 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:11:29.457588 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:11:29.481217 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 10:11:29.495906 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 21 10:11:29.503252 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:29.512012 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 21 10:11:29.512271 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 10:11:29.512408 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 21 10:11:29.512559 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 10:11:29.516222 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 21 10:11:29.517393 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:11:29.519129 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Apr 21 10:11:29.519874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:11:29.521495 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:11:29.523584 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 10:11:29.531396 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:11:29.531843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:11:29.533598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 10:11:29.536812 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:11:29.546369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:11:29.548386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 10:11:29.549258 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:29.551714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:11:29.552203 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:11:29.564704 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:11:29.564876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:11:29.567909 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:29.568315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:11:29.574432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 21 10:11:29.574841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:11:29.580263 augenrules[1421]: No rules Apr 21 10:11:29.584958 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 10:11:29.585626 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:29.587161 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:11:29.588118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:11:29.588293 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:11:29.602482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 10:11:29.602629 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 10:11:29.605064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:29.606423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 10:11:29.613430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 10:11:29.618355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 10:11:29.627370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 10:11:29.627796 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 10:11:29.627878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Apr 21 10:11:29.627947 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 10:11:29.628763 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 10:11:29.629349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 10:11:29.632840 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 10:11:29.642691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:11:29.647427 systemd[1]: Finished ensure-sysext.service. Apr 21 10:11:29.652213 kernel: EDAC MC: Ver: 3.0.0 Apr 21 10:11:29.657477 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 10:11:29.666620 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 10:11:29.666764 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 10:11:29.671160 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 10:11:29.671375 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 10:11:29.679392 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 10:11:29.679747 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 10:11:29.688375 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 10:11:29.692378 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 21 10:11:29.694359 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:11:29.695890 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 21 10:11:29.704242 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 21 10:11:29.708221 kernel: Console: switching to colour dummy device 80x25 Apr 21 10:11:29.717278 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 10:11:29.721399 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 10:11:29.722016 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 10:11:29.725592 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 21 10:11:29.725812 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 21 10:11:29.725824 kernel: [drm] features: -context_init Apr 21 10:11:29.725921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 10:11:29.729231 kernel: [drm] number of scanouts: 1 Apr 21 10:11:29.729268 kernel: [drm] number of cap sets: 0 Apr 21 10:11:29.734217 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 21 10:11:29.737213 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 21 10:11:29.744675 kernel: Console: switching to colour frame buffer device 160x50 Apr 21 10:11:29.752219 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 21 10:11:29.760495 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:11:29.760740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:29.766069 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 21 10:11:29.794129 systemd-networkd[1404]: lo: Link UP Apr 21 10:11:29.796306 systemd-networkd[1404]: lo: Gained carrier Apr 21 10:11:29.800333 systemd-networkd[1404]: Enumeration completed Apr 21 10:11:29.800477 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:11:29.804333 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:11:29.804375 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:11:29.806298 systemd-networkd[1404]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:11:29.806342 systemd-networkd[1404]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:11:29.807287 systemd-networkd[1404]: eth0: Link UP Apr 21 10:11:29.807341 systemd-networkd[1404]: eth0: Gained carrier Apr 21 10:11:29.807377 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:11:29.808323 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 10:11:29.809695 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 21 10:11:29.810595 systemd-networkd[1404]: eth1: Link UP Apr 21 10:11:29.810644 systemd-networkd[1404]: eth1: Gained carrier Apr 21 10:11:29.810678 systemd-networkd[1404]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:11:29.813349 systemd-resolved[1407]: Positive Trust Anchors: Apr 21 10:11:29.813362 systemd-resolved[1407]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:11:29.813385 systemd-resolved[1407]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:11:29.818287 systemd-resolved[1407]: Using system hostname 'ci-4081-3-7-7-16e5f88171'. Apr 21 10:11:29.818677 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 21 10:11:29.822217 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:11:29.822342 systemd[1]: Reached target network.target - Network. Apr 21 10:11:29.822393 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:11:29.823309 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 10:11:29.823393 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 10:11:29.829161 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:11:29.839298 systemd-networkd[1404]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 21 10:11:29.841427 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Apr 21 10:11:29.849773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:11:29.861044 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 21 10:11:29.861670 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Apr 21 10:11:29.861764 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 10:11:29.861928 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 10:11:29.862019 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 10:11:29.862413 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 10:11:29.864226 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 10:11:29.864305 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 10:11:29.864363 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 10:11:29.864385 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:11:29.864426 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:11:29.865779 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 10:11:29.867822 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 10:11:29.872241 systemd-networkd[1404]: eth0: DHCPv4 address 37.27.23.25/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 21 10:11:29.872656 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 10:11:29.874526 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Apr 21 10:11:29.875335 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 21 10:11:29.877611 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 10:11:29.878004 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 10:11:29.878478 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Apr 21 10:11:29.880699 systemd[1]: Reached target basic.target - Basic System. 
Apr 21 10:11:29.881047 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:11:29.881070 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 10:11:29.884281 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 10:11:29.886245 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 21 10:11:29.889604 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 21 10:11:29.891174 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 10:11:29.896308 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 10:11:29.904321 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 10:11:29.904698 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 10:11:29.908353 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 10:11:29.914373 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 21 10:11:29.918501 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 21 10:11:29.920757 coreos-metadata[1472]: Apr 21 10:11:29.920 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 21 10:11:29.922122 jq[1474]: false Apr 21 10:11:29.922627 coreos-metadata[1472]: Apr 21 10:11:29.922 INFO Fetch successful Apr 21 10:11:29.922708 coreos-metadata[1472]: Apr 21 10:11:29.922 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 21 10:11:29.926009 coreos-metadata[1472]: Apr 21 10:11:29.925 INFO Fetch successful Apr 21 10:11:29.927653 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 21 10:11:29.935375 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:11:29.936355 extend-filesystems[1475]: Found loop4
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found loop5
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found loop6
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found loop7
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda1
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda2
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda3
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found usr
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda4
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda6
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda7
Apr 21 10:11:29.937687 extend-filesystems[1475]: Found sda9
Apr 21 10:11:29.937687 extend-filesystems[1475]: Checking size of /dev/sda9
Apr 21 10:11:29.943072 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:11:29.950375 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:11:29.950780 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:11:29.958119 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:11:29.960312 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:11:29.963873 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:11:29.975544 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:11:29.976038 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:11:29.977742 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:11:29.978673 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:11:29.979184 dbus-daemon[1473]: [system] SELinux support is enabled
Apr 21 10:11:29.980340 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:11:29.986404 extend-filesystems[1475]: Resized partition /dev/sda9
Apr 21 10:11:30.003784 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 21 10:11:29.998139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:11:30.003942 extend-filesystems[1500]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:11:29.998181 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:11:30.001158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:11:30.001178 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:11:30.025286 jq[1489]: true
Apr 21 10:11:30.028842 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:11:30.039070 jq[1515]: true
Apr 21 10:11:30.044584 update_engine[1487]: I20260421 10:11:30.044037 1487 main.cc:92] Flatcar Update Engine starting
Apr 21 10:11:30.058222 update_engine[1487]: I20260421 10:11:30.051657 1487 update_check_scheduler.cc:74] Next update check in 4m10s
Apr 21 10:11:30.052379 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:11:30.058444 tar[1497]: linux-amd64/LICENSE
Apr 21 10:11:30.058444 tar[1497]: linux-amd64/helm
Apr 21 10:11:30.062450 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:11:30.065708 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:11:30.065887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:11:30.071213 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1328)
Apr 21 10:11:30.080865 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:11:30.086917 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:11:30.136919 systemd-logind[1486]: New seat seat0.
Apr 21 10:11:30.141503 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 21 10:11:30.141521 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:11:30.142718 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:11:30.214385 bash[1547]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:11:30.217069 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:11:30.231416 systemd[1]: Starting sshkeys.service...
Apr 21 10:11:30.247592 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:11:30.257450 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:11:30.270358 containerd[1501]: time="2026-04-21T10:11:30.270290970Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:11:30.281421 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:11:30.305743 coreos-metadata[1551]: Apr 21 10:11:30.305 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 21 10:11:30.307581 coreos-metadata[1551]: Apr 21 10:11:30.307 INFO Fetch successful
Apr 21 10:11:30.308863 containerd[1501]: time="2026-04-21T10:11:30.308827821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311136 containerd[1501]: time="2026-04-21T10:11:30.311114683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311209 containerd[1501]: time="2026-04-21T10:11:30.311184658Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:11:30.311244 containerd[1501]: time="2026-04-21T10:11:30.311236075Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:11:30.311390 containerd[1501]: time="2026-04-21T10:11:30.311379199Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:11:30.311427 containerd[1501]: time="2026-04-21T10:11:30.311418589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311513 containerd[1501]: time="2026-04-21T10:11:30.311502354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311544 containerd[1501]: time="2026-04-21T10:11:30.311536636Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311727 containerd[1501]: time="2026-04-21T10:11:30.311713751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311787 containerd[1501]: time="2026-04-21T10:11:30.311779069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311818 containerd[1501]: time="2026-04-21T10:11:30.311809986Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:11:30.311857 containerd[1501]: time="2026-04-21T10:11:30.311849495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:11:30.312010 unknown[1551]: wrote ssh authorized keys file for user: core
Apr 21 10:11:30.313058 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:11:30.316221 containerd[1501]: time="2026-04-21T10:11:30.315318019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:11:30.316221 containerd[1501]: time="2026-04-21T10:11:30.315513152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:11:30.316221 containerd[1501]: time="2026-04-21T10:11:30.315617088Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:11:30.316221 containerd[1501]: time="2026-04-21T10:11:30.315626572Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:11:30.316221 containerd[1501]: time="2026-04-21T10:11:30.315703367Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:11:30.316221 containerd[1501]: time="2026-04-21T10:11:30.315739922Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:11:30.328704 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:11:30.332136 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:11:30.338574 containerd[1501]: time="2026-04-21T10:11:30.338541885Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:11:30.338948 containerd[1501]: time="2026-04-21T10:11:30.338916096Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:11:30.338948 containerd[1501]: time="2026-04-21T10:11:30.338940503Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:11:30.339022 containerd[1501]: time="2026-04-21T10:11:30.338961024Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:11:30.339022 containerd[1501]: time="2026-04-21T10:11:30.338980373Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:11:30.339119 containerd[1501]: time="2026-04-21T10:11:30.339105380Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:11:30.340242 containerd[1501]: time="2026-04-21T10:11:30.340221705Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:11:30.343021 containerd[1501]: time="2026-04-21T10:11:30.342996489Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:11:30.343207 containerd[1501]: time="2026-04-21T10:11:30.343182327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:11:30.343288 containerd[1501]: time="2026-04-21T10:11:30.343255026Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:11:30.343391 containerd[1501]: time="2026-04-21T10:11:30.343381136Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.343428 containerd[1501]: time="2026-04-21T10:11:30.343420515Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.343535 containerd[1501]: time="2026-04-21T10:11:30.343518732Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.343577 containerd[1501]: time="2026-04-21T10:11:30.343568236Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.343737 containerd[1501]: time="2026-04-21T10:11:30.343726334Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.343772 containerd[1501]: time="2026-04-21T10:11:30.343764711Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.344021 containerd[1501]: time="2026-04-21T10:11:30.343893555Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.344021 containerd[1501]: time="2026-04-21T10:11:30.343937851Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:11:30.344021 containerd[1501]: time="2026-04-21T10:11:30.343955618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344021 containerd[1501]: time="2026-04-21T10:11:30.343966554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344122929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344149078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344159183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344169839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344178452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344521667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344531712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344542648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344552063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344560736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344569989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344615027Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344631962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344642078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.344675 containerd[1501]: time="2026-04-21T10:11:30.344650771Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344783860Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344870060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344906204Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344915077Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344922128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344932433Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344940255Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:11:30.347810 containerd[1501]: time="2026-04-21T10:11:30.344947746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:11:30.347930 containerd[1501]: time="2026-04-21T10:11:30.345398983Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:11:30.347930 containerd[1501]: time="2026-04-21T10:11:30.345464832Z" level=info msg="Connect containerd service"
Apr 21 10:11:30.347930 containerd[1501]: time="2026-04-21T10:11:30.345505153Z" level=info msg="using legacy CRI server"
Apr 21 10:11:30.347930 containerd[1501]: time="2026-04-21T10:11:30.345510621Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:11:30.347930 containerd[1501]: time="2026-04-21T10:11:30.345602188Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:11:30.349329 containerd[1501]: time="2026-04-21T10:11:30.348305695Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:11:30.349329 containerd[1501]: time="2026-04-21T10:11:30.348551874Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:11:30.349329 containerd[1501]: time="2026-04-21T10:11:30.349247408Z" level=info msg="Start subscribing containerd event"
Apr 21 10:11:30.349329 containerd[1501]: time="2026-04-21T10:11:30.349288720Z" level=info msg="Start recovering state"
Apr 21 10:11:30.348426 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:11:30.349726 containerd[1501]: time="2026-04-21T10:11:30.349526887Z" level=info msg="Start event monitor"
Apr 21 10:11:30.349726 containerd[1501]: time="2026-04-21T10:11:30.349538024Z" level=info msg="Start snapshots syncer"
Apr 21 10:11:30.349726 containerd[1501]: time="2026-04-21T10:11:30.349545315Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:11:30.349726 containerd[1501]: time="2026-04-21T10:11:30.349551053Z" level=info msg="Start streaming server"
Apr 21 10:11:30.348595 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:11:30.350537 containerd[1501]: time="2026-04-21T10:11:30.350500247Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:11:30.351190 containerd[1501]: time="2026-04-21T10:11:30.351075961Z" level=info msg="containerd successfully booted in 0.081346s"
Apr 21 10:11:30.356566 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:11:30.357071 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:11:30.361220 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 21 10:11:30.366739 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:11:30.376508 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:11:30.378512 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:11:30.381385 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:11:30.390618 extend-filesystems[1500]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 21 10:11:30.390618 extend-filesystems[1500]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 21 10:11:30.390618 extend-filesystems[1500]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 21 10:11:30.393890 extend-filesystems[1475]: Resized filesystem in /dev/sda9
Apr 21 10:11:30.393890 extend-filesystems[1475]: Found sr0
Apr 21 10:11:30.392180 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:11:30.396096 update-ssh-keys[1569]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:11:30.392374 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:11:30.394177 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:11:30.399578 systemd[1]: Finished sshkeys.service.
Apr 21 10:11:30.637276 tar[1497]: linux-amd64/README.md
Apr 21 10:11:30.651744 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:11:30.852664 systemd-networkd[1404]: eth1: Gained IPv6LL
Apr 21 10:11:30.853620 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection.
Apr 21 10:11:30.858504 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:11:30.861163 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:11:30.874915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:11:30.880598 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:11:30.921836 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:11:31.236458 systemd-networkd[1404]: eth0: Gained IPv6LL
Apr 21 10:11:31.237913 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection.
Apr 21 10:11:31.740432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:11:31.741897 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:11:31.743948 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:11:31.746748 systemd[1]: Startup finished in 1.547s (kernel) + 5.828s (initrd) + 4.208s (userspace) = 11.585s.
Apr 21 10:11:32.194364 kubelet[1603]: E0421 10:11:32.194248 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:11:32.197045 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:11:32.197240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:11:36.160227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:11:36.167933 systemd[1]: Started sshd@0-37.27.23.25:22-50.85.169.122:38392.service - OpenSSH per-connection server daemon (50.85.169.122:38392).
Apr 21 10:11:36.398452 sshd[1614]: Accepted publickey for core from 50.85.169.122 port 38392 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:11:36.402904 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:36.419091 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:11:36.425655 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:11:36.429385 systemd-logind[1486]: New session 1 of user core.
Apr 21 10:11:36.464223 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:11:36.473713 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:11:36.478141 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:11:36.591799 systemd[1618]: Queued start job for default target default.target.
Apr 21 10:11:36.602335 systemd[1618]: Created slice app.slice - User Application Slice.
Apr 21 10:11:36.602358 systemd[1618]: Reached target paths.target - Paths.
Apr 21 10:11:36.602370 systemd[1618]: Reached target timers.target - Timers.
Apr 21 10:11:36.603629 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:11:36.619073 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:11:36.619210 systemd[1618]: Reached target sockets.target - Sockets.
Apr 21 10:11:36.619222 systemd[1618]: Reached target basic.target - Basic System.
Apr 21 10:11:36.619261 systemd[1618]: Reached target default.target - Main User Target.
Apr 21 10:11:36.619293 systemd[1618]: Startup finished in 129ms.
Apr 21 10:11:36.619571 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:11:36.629334 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:11:36.804578 systemd[1]: Started sshd@1-37.27.23.25:22-50.85.169.122:38398.service - OpenSSH per-connection server daemon (50.85.169.122:38398).
Apr 21 10:11:37.015250 sshd[1629]: Accepted publickey for core from 50.85.169.122 port 38398 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:11:37.017448 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:37.024841 systemd-logind[1486]: New session 2 of user core.
Apr 21 10:11:37.035504 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:11:37.180019 sshd[1629]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:37.184495 systemd[1]: sshd@1-37.27.23.25:22-50.85.169.122:38398.service: Deactivated successfully.
Apr 21 10:11:37.187761 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:11:37.190495 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:11:37.191813 systemd-logind[1486]: Removed session 2.
Apr 21 10:11:37.228619 systemd[1]: Started sshd@2-37.27.23.25:22-50.85.169.122:38408.service - OpenSSH per-connection server daemon (50.85.169.122:38408).
Apr 21 10:11:37.442978 sshd[1636]: Accepted publickey for core from 50.85.169.122 port 38408 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:11:37.444122 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:37.451605 systemd-logind[1486]: New session 3 of user core.
Apr 21 10:11:37.462431 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:11:37.604776 sshd[1636]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:37.608856 systemd[1]: sshd@2-37.27.23.25:22-50.85.169.122:38408.service: Deactivated successfully.
Apr 21 10:11:37.611109 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:11:37.612852 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:11:37.614471 systemd-logind[1486]: Removed session 3.
Apr 21 10:11:37.651453 systemd[1]: Started sshd@3-37.27.23.25:22-50.85.169.122:38424.service - OpenSSH per-connection server daemon (50.85.169.122:38424).
Apr 21 10:11:37.876229 sshd[1643]: Accepted publickey for core from 50.85.169.122 port 38424 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:11:37.877610 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:37.883594 systemd-logind[1486]: New session 4 of user core.
Apr 21 10:11:37.892367 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:11:38.045757 sshd[1643]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:38.051156 systemd[1]: sshd@3-37.27.23.25:22-50.85.169.122:38424.service: Deactivated successfully.
Apr 21 10:11:38.054271 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:11:38.057108 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:11:38.059021 systemd-logind[1486]: Removed session 4.
Apr 21 10:11:38.090868 systemd[1]: Started sshd@4-37.27.23.25:22-50.85.169.122:38430.service - OpenSSH per-connection server daemon (50.85.169.122:38430).
Apr 21 10:11:38.300181 sshd[1650]: Accepted publickey for core from 50.85.169.122 port 38430 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:11:38.301392 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:38.310162 systemd-logind[1486]: New session 5 of user core.
Apr 21 10:11:38.316436 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:11:38.446346 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:11:38.446710 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:11:38.463460 sudo[1653]: pam_unix(sudo:session): session closed for user root
Apr 21 10:11:38.495941 sshd[1650]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:38.501423 systemd[1]: sshd@4-37.27.23.25:22-50.85.169.122:38430.service: Deactivated successfully.
Apr 21 10:11:38.505487 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:11:38.507797 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:11:38.510436 systemd-logind[1486]: Removed session 5.
Apr 21 10:11:38.546602 systemd[1]: Started sshd@5-37.27.23.25:22-50.85.169.122:40050.service - OpenSSH per-connection server daemon (50.85.169.122:40050).
Apr 21 10:11:38.762236 sshd[1658]: Accepted publickey for core from 50.85.169.122 port 40050 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:11:38.764146 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:38.772349 systemd-logind[1486]: New session 6 of user core.
Apr 21 10:11:38.786457 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:11:38.911605 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:11:38.912329 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:11:38.919181 sudo[1662]: pam_unix(sudo:session): session closed for user root
Apr 21 10:11:38.931030 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:11:38.931815 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:11:38.952558 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:11:38.968236 auditctl[1665]: No rules
Apr 21 10:11:38.968695 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:11:38.968941 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:11:38.979549 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:11:39.010413 augenrules[1683]: No rules
Apr 21 10:11:39.012520 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:11:39.014111 sudo[1661]: pam_unix(sudo:session): session closed for user root
Apr 21 10:11:39.045803 sshd[1658]: pam_unix(sshd:session): session closed for user core
Apr 21 10:11:39.049455 systemd[1]: sshd@5-37.27.23.25:22-50.85.169.122:40050.service: Deactivated successfully.
Apr 21 10:11:39.051747 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:11:39.053303 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:11:39.054713 systemd-logind[1486]: Removed session 6.
Apr 21 10:11:39.098620 systemd[1]: Started sshd@6-37.27.23.25:22-50.85.169.122:40066.service - OpenSSH per-connection server daemon (50.85.169.122:40066).
Apr 21 10:11:39.310359 sshd[1691]: Accepted publickey for core from 50.85.169.122 port 40066 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:11:39.313095 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:11:39.321546 systemd-logind[1486]: New session 7 of user core.
Apr 21 10:11:39.333324 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:11:39.452534 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:11:39.453146 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:11:39.762398 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:11:39.772918 (dockerd)[1710]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:11:40.103833 dockerd[1710]: time="2026-04-21T10:11:40.103691998Z" level=info msg="Starting up"
Apr 21 10:11:40.207759 dockerd[1710]: time="2026-04-21T10:11:40.207725431Z" level=info msg="Loading containers: start."
Apr 21 10:11:40.308226 kernel: Initializing XFRM netlink socket
Apr 21 10:11:40.332881 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection.
Apr 21 10:11:41.503157 systemd-resolved[1407]: Clock change detected. Flushing caches.
Apr 21 10:11:41.503417 systemd-timesyncd[1443]: Contacted time server 139.162.156.95:123 (2.flatcar.pool.ntp.org).
Apr 21 10:11:41.504575 systemd-timesyncd[1443]: Initial clock synchronization to Tue 2026-04-21 10:11:41.502064 UTC.
Apr 21 10:11:41.526440 systemd-networkd[1404]: docker0: Link UP
Apr 21 10:11:41.537866 dockerd[1710]: time="2026-04-21T10:11:41.537826685Z" level=info msg="Loading containers: done."
Apr 21 10:11:41.551520 dockerd[1710]: time="2026-04-21T10:11:41.551472289Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:11:41.551674 dockerd[1710]: time="2026-04-21T10:11:41.551572630Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:11:41.551694 dockerd[1710]: time="2026-04-21T10:11:41.551678789Z" level=info msg="Daemon has completed initialization"
Apr 21 10:11:41.579778 dockerd[1710]: time="2026-04-21T10:11:41.579721325Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:11:41.579992 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:11:42.068608 containerd[1501]: time="2026-04-21T10:11:42.068557533Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 21 10:11:42.677698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2667030764.mount: Deactivated successfully.
Apr 21 10:11:43.589959 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:11:43.596079 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:11:43.724132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:11:43.727534 (kubelet)[1914]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:11:43.766563 kubelet[1914]: E0421 10:11:43.766493 1914 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:11:43.770908 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:11:43.771081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:11:43.850922 containerd[1501]: time="2026-04-21T10:11:43.850799047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:43.852000 containerd[1501]: time="2026-04-21T10:11:43.851877385Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27579523"
Apr 21 10:11:43.852986 containerd[1501]: time="2026-04-21T10:11:43.852876735Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:43.855069 containerd[1501]: time="2026-04-21T10:11:43.855032029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:43.855929 containerd[1501]: time="2026-04-21T10:11:43.855778048Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.787184351s"
Apr 21 10:11:43.855929 containerd[1501]: time="2026-04-21T10:11:43.855804798Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 21 10:11:43.856728 containerd[1501]: time="2026-04-21T10:11:43.856708504Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 21 10:11:44.967866 containerd[1501]: time="2026-04-21T10:11:44.967819133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:44.968962 containerd[1501]: time="2026-04-21T10:11:44.968830951Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451681"
Apr 21 10:11:44.970031 containerd[1501]: time="2026-04-21T10:11:44.969966675Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:44.972032 containerd[1501]: time="2026-04-21T10:11:44.972005484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:44.972798 containerd[1501]: time="2026-04-21T10:11:44.972667617Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.115935098s"
Apr 21 10:11:44.972798 containerd[1501]: time="2026-04-21T10:11:44.972691353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 21 10:11:44.973155 containerd[1501]: time="2026-04-21T10:11:44.973136411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 21 10:11:45.977782 containerd[1501]: time="2026-04-21T10:11:45.977710681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:45.978860 containerd[1501]: time="2026-04-21T10:11:45.978705033Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555312"
Apr 21 10:11:45.979802 containerd[1501]: time="2026-04-21T10:11:45.979499054Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:45.981966 containerd[1501]: time="2026-04-21T10:11:45.981936020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:45.982951 containerd[1501]: time="2026-04-21T10:11:45.982917363Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 1.009757607s"
Apr 21 10:11:45.982999 containerd[1501]: time="2026-04-21T10:11:45.982951144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 21 10:11:45.983438 containerd[1501]: time="2026-04-21T10:11:45.983392616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 21 10:11:47.035597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757006591.mount: Deactivated successfully.
Apr 21 10:11:47.249138 containerd[1501]: time="2026-04-21T10:11:47.249085032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:47.250318 containerd[1501]: time="2026-04-21T10:11:47.250233856Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699953"
Apr 21 10:11:47.251463 containerd[1501]: time="2026-04-21T10:11:47.251181107Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:47.252821 containerd[1501]: time="2026-04-21T10:11:47.252790011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:47.253497 containerd[1501]: time="2026-04-21T10:11:47.253177693Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.269757475s"
Apr 21 10:11:47.253497 containerd[1501]: time="2026-04-21T10:11:47.253201288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 21 10:11:47.253884 containerd[1501]: time="2026-04-21T10:11:47.253849440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 21 10:11:47.809318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188533771.mount: Deactivated successfully.
Apr 21 10:11:48.694149 containerd[1501]: time="2026-04-21T10:11:48.694090712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:48.695419 containerd[1501]: time="2026-04-21T10:11:48.695163121Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556642"
Apr 21 10:11:48.697371 containerd[1501]: time="2026-04-21T10:11:48.696466847Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:48.698981 containerd[1501]: time="2026-04-21T10:11:48.698949392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:48.700458 containerd[1501]: time="2026-04-21T10:11:48.699824976Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.445956447s"
Apr 21 10:11:48.700458 containerd[1501]: time="2026-04-21T10:11:48.699852007Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 21 10:11:48.700458 containerd[1501]: time="2026-04-21T10:11:48.700274671Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 21 10:11:49.243557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052358860.mount: Deactivated successfully.
Apr 21 10:11:49.253382 containerd[1501]: time="2026-04-21T10:11:49.251787129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:49.253382 containerd[1501]: time="2026-04-21T10:11:49.253314130Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240"
Apr 21 10:11:49.253934 containerd[1501]: time="2026-04-21T10:11:49.253901522Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:49.257193 containerd[1501]: time="2026-04-21T10:11:49.257155614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:49.258736 containerd[1501]: time="2026-04-21T10:11:49.258671559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 558.322887ms"
Apr 21 10:11:49.258908 containerd[1501]: time="2026-04-21T10:11:49.258737678Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 21 10:11:49.260094 containerd[1501]: time="2026-04-21T10:11:49.260046201Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 21 10:11:49.881757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702517564.mount: Deactivated successfully.
Apr 21 10:11:50.575846 containerd[1501]: time="2026-04-21T10:11:50.575792799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:50.576699 containerd[1501]: time="2026-04-21T10:11:50.576631618Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644553"
Apr 21 10:11:50.577977 containerd[1501]: time="2026-04-21T10:11:50.577323215Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:50.579639 containerd[1501]: time="2026-04-21T10:11:50.579255825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:11:50.580227 containerd[1501]: time="2026-04-21T10:11:50.579994994Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.319865327s"
Apr 21 10:11:50.580227 containerd[1501]: time="2026-04-21T10:11:50.580020222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 21 10:11:51.418693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:11:51.424783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:11:51.447851 systemd[1]: Reloading requested from client PID 2086 ('systemctl') (unit session-7.scope)...
Apr 21 10:11:51.447964 systemd[1]: Reloading...
Apr 21 10:11:51.549634 zram_generator::config[2126]: No configuration found.
Apr 21 10:11:51.631759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:11:51.694213 systemd[1]: Reloading finished in 245 ms.
Apr 21 10:11:51.736685 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:11:51.736780 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:11:51.737013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:11:51.744097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:11:51.888870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:11:51.892144 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:11:51.921919 kubelet[2179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:11:52.097762 kubelet[2179]: I0421 10:11:52.097588 2179 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 10:11:52.097762 kubelet[2179]: I0421 10:11:52.097629 2179 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:11:52.097762 kubelet[2179]: I0421 10:11:52.097646 2179 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 10:11:52.097762 kubelet[2179]: I0421 10:11:52.097651 2179 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:11:52.098163 kubelet[2179]: I0421 10:11:52.097801 2179 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 10:11:52.105657 kubelet[2179]: I0421 10:11:52.103702 2179 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:11:52.105657 kubelet[2179]: E0421 10:11:52.105151 2179 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://37.27.23.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.23.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:11:52.109963 kubelet[2179]: E0421 10:11:52.109909 2179 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:11:52.110165 kubelet[2179]: I0421 10:11:52.110145 2179 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:11:52.118985 kubelet[2179]: I0421 10:11:52.118952 2179 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 10:11:52.119703 kubelet[2179]: I0421 10:11:52.119606 2179 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:11:52.119796 kubelet[2179]: I0421 10:11:52.119646 2179 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-7-16e5f88171","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:11:52.119796 kubelet[2179]: I0421 10:11:52.119777 2179 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 10:11:52.119796 kubelet[2179]: I0421 10:11:52.119784 2179 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 10:11:52.120005 kubelet[2179]: I0421 10:11:52.119870 2179 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 10:11:52.121884 kubelet[2179]: I0421 10:11:52.121851 2179 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 10:11:52.122026 kubelet[2179]: I0421 10:11:52.122001 2179 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 10:11:52.122026 kubelet[2179]: I0421 10:11:52.122014 2179 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:11:52.122151 kubelet[2179]: I0421 10:11:52.122033 2179 kubelet.go:394] "Adding apiserver pod source"
Apr 21 10:11:52.122151 kubelet[2179]: I0421 10:11:52.122040 2179 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:11:52.124854 kubelet[2179]: I0421 10:11:52.124823 2179 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:11:52.127661 kubelet[2179]: I0421 10:11:52.126715 2179 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:11:52.127661 kubelet[2179]: I0421 10:11:52.126738 2179 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 10:11:52.127661 kubelet[2179]: W0421 10:11:52.126789 2179 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:11:52.129637 kubelet[2179]: I0421 10:11:52.129114 2179 server.go:1257] "Started kubelet"
Apr 21 10:11:52.137390 kubelet[2179]: I0421 10:11:52.137373 2179 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 10:11:52.138460 kubelet[2179]: E0421 10:11:52.135052 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.23.25:6443/api/v1/namespaces/default/events\": dial tcp 37.27.23.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-7-16e5f88171.18a8578c1355e3bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-7-16e5f88171,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-7-16e5f88171,},FirstTimestamp:2026-04-21 10:11:52.129094589 +0000 UTC m=+0.233763113,LastTimestamp:2026-04-21 10:11:52.129094589 +0000 UTC m=+0.233763113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-7-16e5f88171,}"
Apr 21 10:11:52.142072 kubelet[2179]: I0421 10:11:52.142033 2179 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:11:52.143338 kubelet[2179]: I0421 10:11:52.143307 2179 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:11:52.146054 kubelet[2179]: I0421 10:11:52.146043 2179 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 10:11:52.146296 kubelet[2179]: E0421 10:11:52.146278 2179 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-7-16e5f88171\" not found"
Apr 21 10:11:52.148743 kubelet[2179]: I0421 10:11:52.148726 2179 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 10:11:52.148855 kubelet[2179]: I0421 10:11:52.148847 2179 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 10:11:52.149463 kubelet[2179]: I0421 10:11:52.149376 2179 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:11:52.149496 kubelet[2179]: I0421 10:11:52.149486 2179 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 10:11:52.149859 kubelet[2179]: I0421 10:11:52.149836 2179 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:11:52.150207 kubelet[2179]: I0421 10:11:52.150170 2179 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:11:52.151572 kubelet[2179]: E0421 10:11:52.151493 2179 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.23.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-16e5f88171?timeout=10s\": dial tcp 37.27.23.25:6443: connect: connection refused" interval="200ms"
Apr 21 10:11:52.152731 kubelet[2179]: I0421 10:11:52.152701 2179 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:11:52.153722 kubelet[2179]: I0421 10:11:52.152868 2179 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:11:52.154799 kubelet[2179]: I0421 10:11:52.154733 2179 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:11:52.172387 kubelet[2179]: E0421 10:11:52.172320 2179 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:11:52.175375 kubelet[2179]: I0421 10:11:52.175109 2179 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:11:52.177711 kubelet[2179]: I0421 10:11:52.177698 2179 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:11:52.177784 kubelet[2179]: I0421 10:11:52.177777 2179 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 10:11:52.177833 kubelet[2179]: I0421 10:11:52.177827 2179 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 10:11:52.177927 kubelet[2179]: E0421 10:11:52.177915 2179 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:11:52.184095 kubelet[2179]: I0421 10:11:52.184073 2179 cpu_manager.go:225] "Starting" policy="none"
Apr 21 10:11:52.184095 kubelet[2179]: I0421 10:11:52.184084 2179 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 10:11:52.184453 kubelet[2179]: I0421 10:11:52.184099 2179 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 10:11:52.187056 kubelet[2179]: I0421 10:11:52.186984 2179 policy_none.go:50] "Start"
Apr 21 10:11:52.187056 kubelet[2179]: I0421 10:11:52.186998 2179 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 10:11:52.187056 kubelet[2179]: I0421 10:11:52.187008 2179 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 10:11:52.187796 kubelet[2179]: I0421 10:11:52.187778 2179 policy_none.go:44] "Start"
Apr 21 10:11:52.191013 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:11:52.205329 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:11:52.207937 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:11:52.221393 kubelet[2179]: E0421 10:11:52.221376 2179 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:11:52.221688 kubelet[2179]: I0421 10:11:52.221670 2179 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:11:52.221762 kubelet[2179]: I0421 10:11:52.221735 2179 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:11:52.222100 kubelet[2179]: I0421 10:11:52.222091 2179 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:11:52.223871 kubelet[2179]: E0421 10:11:52.223860 2179 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:11:52.223943 kubelet[2179]: E0421 10:11:52.223936 2179 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-7-16e5f88171\" not found" Apr 21 10:11:52.293808 systemd[1]: Created slice kubepods-burstable-poda744c92ced63e68c2611b8b2cf47573f.slice - libcontainer container kubepods-burstable-poda744c92ced63e68c2611b8b2cf47573f.slice. Apr 21 10:11:52.304967 kubelet[2179]: E0421 10:11:52.304918 2179 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-16e5f88171\" not found" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.308670 systemd[1]: Created slice kubepods-burstable-podc9829fcb09461e901f9992aa024e97c8.slice - libcontainer container kubepods-burstable-podc9829fcb09461e901f9992aa024e97c8.slice. 
Apr 21 10:11:52.311360 kubelet[2179]: E0421 10:11:52.311328 2179 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-16e5f88171\" not found" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.313387 systemd[1]: Created slice kubepods-burstable-pod9285ce562c009e20953c923b3b29b66e.slice - libcontainer container kubepods-burstable-pod9285ce562c009e20953c923b3b29b66e.slice. Apr 21 10:11:52.315565 kubelet[2179]: E0421 10:11:52.315538 2179 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-16e5f88171\" not found" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.323894 kubelet[2179]: I0421 10:11:52.323608 2179 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.324098 kubelet[2179]: E0421 10:11:52.324067 2179 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://37.27.23.25:6443/api/v1/nodes\": dial tcp 37.27.23.25:6443: connect: connection refused" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.352985 kubelet[2179]: E0421 10:11:52.352827 2179 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.23.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-16e5f88171?timeout=10s\": dial tcp 37.27.23.25:6443: connect: connection refused" interval="400ms" Apr 21 10:11:52.449735 kubelet[2179]: I0421 10:11:52.449665 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.449735 kubelet[2179]: I0421 10:11:52.449701 2179 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.449735 kubelet[2179]: I0421 10:11:52.449716 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.449735 kubelet[2179]: I0421 10:11:52.449733 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a744c92ced63e68c2611b8b2cf47573f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" (UID: \"a744c92ced63e68c2611b8b2cf47573f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.449735 kubelet[2179]: I0421 10:11:52.449752 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.450182 kubelet[2179]: I0421 10:11:52.449767 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: 
\"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.450182 kubelet[2179]: I0421 10:11:52.449779 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9285ce562c009e20953c923b3b29b66e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-7-16e5f88171\" (UID: \"9285ce562c009e20953c923b3b29b66e\") " pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.450182 kubelet[2179]: I0421 10:11:52.449788 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a744c92ced63e68c2611b8b2cf47573f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" (UID: \"a744c92ced63e68c2611b8b2cf47573f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.450182 kubelet[2179]: I0421 10:11:52.449800 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a744c92ced63e68c2611b8b2cf47573f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" (UID: \"a744c92ced63e68c2611b8b2cf47573f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.527554 kubelet[2179]: I0421 10:11:52.527472 2179 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.528032 kubelet[2179]: E0421 10:11:52.527941 2179 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://37.27.23.25:6443/api/v1/nodes\": dial tcp 37.27.23.25:6443: connect: connection refused" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.621854 containerd[1501]: time="2026-04-21T10:11:52.621715365Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-7-16e5f88171,Uid:a744c92ced63e68c2611b8b2cf47573f,Namespace:kube-system,Attempt:0,}" Apr 21 10:11:52.626657 containerd[1501]: time="2026-04-21T10:11:52.626266634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-7-16e5f88171,Uid:c9829fcb09461e901f9992aa024e97c8,Namespace:kube-system,Attempt:0,}" Apr 21 10:11:52.627753 containerd[1501]: time="2026-04-21T10:11:52.627605763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-7-16e5f88171,Uid:9285ce562c009e20953c923b3b29b66e,Namespace:kube-system,Attempt:0,}" Apr 21 10:11:52.753859 kubelet[2179]: E0421 10:11:52.753794 2179 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.23.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-16e5f88171?timeout=10s\": dial tcp 37.27.23.25:6443: connect: connection refused" interval="800ms" Apr 21 10:11:52.931565 kubelet[2179]: I0421 10:11:52.931388 2179 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:52.932209 kubelet[2179]: E0421 10:11:52.932079 2179 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://37.27.23.25:6443/api/v1/nodes\": dial tcp 37.27.23.25:6443: connect: connection refused" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:53.012135 kubelet[2179]: E0421 10:11:53.011998 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.23.25:6443/api/v1/namespaces/default/events\": dial tcp 37.27.23.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-7-16e5f88171.18a8578c1355e3bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-7-16e5f88171,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-7-16e5f88171,},FirstTimestamp:2026-04-21 10:11:52.129094589 +0000 UTC m=+0.233763113,LastTimestamp:2026-04-21 10:11:52.129094589 +0000 UTC m=+0.233763113,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-7-16e5f88171,}" Apr 21 10:11:53.130788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1219842823.mount: Deactivated successfully. Apr 21 10:11:53.139437 containerd[1501]: time="2026-04-21T10:11:53.139351345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:11:53.142736 containerd[1501]: time="2026-04-21T10:11:53.142606819Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 21 10:11:53.143552 containerd[1501]: time="2026-04-21T10:11:53.143511246Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:11:53.144803 containerd[1501]: time="2026-04-21T10:11:53.144742925Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:11:53.146603 containerd[1501]: time="2026-04-21T10:11:53.146563196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:11:53.148345 containerd[1501]: time="2026-04-21T10:11:53.148240192Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:11:53.149374 
containerd[1501]: time="2026-04-21T10:11:53.149306672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:11:53.156660 containerd[1501]: time="2026-04-21T10:11:53.155156089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:11:53.156660 containerd[1501]: time="2026-04-21T10:11:53.156417471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.029174ms" Apr 21 10:11:53.159028 containerd[1501]: time="2026-04-21T10:11:53.158975219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 531.234914ms" Apr 21 10:11:53.160338 containerd[1501]: time="2026-04-21T10:11:53.160298114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.927545ms" Apr 21 10:11:53.275578 containerd[1501]: time="2026-04-21T10:11:53.274844942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:11:53.275578 containerd[1501]: time="2026-04-21T10:11:53.274893675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:11:53.275578 containerd[1501]: time="2026-04-21T10:11:53.274915798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:11:53.275578 containerd[1501]: time="2026-04-21T10:11:53.275031502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:11:53.277447 containerd[1501]: time="2026-04-21T10:11:53.277368688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:11:53.278573 containerd[1501]: time="2026-04-21T10:11:53.277568799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:11:53.278573 containerd[1501]: time="2026-04-21T10:11:53.278373897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:11:53.278573 containerd[1501]: time="2026-04-21T10:11:53.278487838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:11:53.285469 containerd[1501]: time="2026-04-21T10:11:53.284356353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:11:53.285469 containerd[1501]: time="2026-04-21T10:11:53.284418616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:11:53.285469 containerd[1501]: time="2026-04-21T10:11:53.284431756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:11:53.285469 containerd[1501]: time="2026-04-21T10:11:53.284513398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:11:53.309758 systemd[1]: Started cri-containerd-1aeba63ce6da760f64f45d61235996cea981ef88b1863772cd1334bc6d666294.scope - libcontainer container 1aeba63ce6da760f64f45d61235996cea981ef88b1863772cd1334bc6d666294. Apr 21 10:11:53.312129 systemd[1]: Started cri-containerd-967e84a1cb65748e5f4b10660499aee97dfbced361e9ca6c80c416c69c229021.scope - libcontainer container 967e84a1cb65748e5f4b10660499aee97dfbced361e9ca6c80c416c69c229021. Apr 21 10:11:53.316995 systemd[1]: Started cri-containerd-72da53be2c21b795556d2780498e2c5a030032aff50d4a7214171e1b6ab4c8bf.scope - libcontainer container 72da53be2c21b795556d2780498e2c5a030032aff50d4a7214171e1b6ab4c8bf. 
Apr 21 10:11:53.352579 containerd[1501]: time="2026-04-21T10:11:53.351447107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-7-16e5f88171,Uid:a744c92ced63e68c2611b8b2cf47573f,Namespace:kube-system,Attempt:0,} returns sandbox id \"72da53be2c21b795556d2780498e2c5a030032aff50d4a7214171e1b6ab4c8bf\"" Apr 21 10:11:53.357686 containerd[1501]: time="2026-04-21T10:11:53.357578997Z" level=info msg="CreateContainer within sandbox \"72da53be2c21b795556d2780498e2c5a030032aff50d4a7214171e1b6ab4c8bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:11:53.372872 containerd[1501]: time="2026-04-21T10:11:53.372841277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-7-16e5f88171,Uid:9285ce562c009e20953c923b3b29b66e,Namespace:kube-system,Attempt:0,} returns sandbox id \"967e84a1cb65748e5f4b10660499aee97dfbced361e9ca6c80c416c69c229021\"" Apr 21 10:11:53.377804 containerd[1501]: time="2026-04-21T10:11:53.377633167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-7-16e5f88171,Uid:c9829fcb09461e901f9992aa024e97c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1aeba63ce6da760f64f45d61235996cea981ef88b1863772cd1334bc6d666294\"" Apr 21 10:11:53.381734 containerd[1501]: time="2026-04-21T10:11:53.381708752Z" level=info msg="CreateContainer within sandbox \"1aeba63ce6da760f64f45d61235996cea981ef88b1863772cd1334bc6d666294\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:11:53.382633 containerd[1501]: time="2026-04-21T10:11:53.382593590Z" level=info msg="CreateContainer within sandbox \"967e84a1cb65748e5f4b10660499aee97dfbced361e9ca6c80c416c69c229021\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:11:53.384002 containerd[1501]: time="2026-04-21T10:11:53.383974291Z" level=info msg="CreateContainer within sandbox 
\"72da53be2c21b795556d2780498e2c5a030032aff50d4a7214171e1b6ab4c8bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"592c800c53099c43e502889ec0104c6387c93790cf2ece1beb442b2d6f2bec63\"" Apr 21 10:11:53.385064 containerd[1501]: time="2026-04-21T10:11:53.384920821Z" level=info msg="StartContainer for \"592c800c53099c43e502889ec0104c6387c93790cf2ece1beb442b2d6f2bec63\"" Apr 21 10:11:53.395077 containerd[1501]: time="2026-04-21T10:11:53.395038542Z" level=info msg="CreateContainer within sandbox \"1aeba63ce6da760f64f45d61235996cea981ef88b1863772cd1334bc6d666294\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021\"" Apr 21 10:11:53.395825 containerd[1501]: time="2026-04-21T10:11:53.395803129Z" level=info msg="StartContainer for \"f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021\"" Apr 21 10:11:53.400927 containerd[1501]: time="2026-04-21T10:11:53.400829301Z" level=info msg="CreateContainer within sandbox \"967e84a1cb65748e5f4b10660499aee97dfbced361e9ca6c80c416c69c229021\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b\"" Apr 21 10:11:53.405486 containerd[1501]: time="2026-04-21T10:11:53.405442983Z" level=info msg="StartContainer for \"572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b\"" Apr 21 10:11:53.411760 systemd[1]: Started cri-containerd-592c800c53099c43e502889ec0104c6387c93790cf2ece1beb442b2d6f2bec63.scope - libcontainer container 592c800c53099c43e502889ec0104c6387c93790cf2ece1beb442b2d6f2bec63. Apr 21 10:11:53.432838 systemd[1]: Started cri-containerd-f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021.scope - libcontainer container f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021. 
Apr 21 10:11:53.446766 systemd[1]: Started cri-containerd-572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b.scope - libcontainer container 572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b. Apr 21 10:11:53.471409 containerd[1501]: time="2026-04-21T10:11:53.471366015Z" level=info msg="StartContainer for \"592c800c53099c43e502889ec0104c6387c93790cf2ece1beb442b2d6f2bec63\" returns successfully" Apr 21 10:11:53.507375 containerd[1501]: time="2026-04-21T10:11:53.507333381Z" level=info msg="StartContainer for \"f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021\" returns successfully" Apr 21 10:11:53.521662 containerd[1501]: time="2026-04-21T10:11:53.520491213Z" level=info msg="StartContainer for \"572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b\" returns successfully" Apr 21 10:11:53.734552 kubelet[2179]: I0421 10:11:53.733953 2179 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.197306 kubelet[2179]: E0421 10:11:54.197216 2179 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-16e5f88171\" not found" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.200237 kubelet[2179]: E0421 10:11:54.200211 2179 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-16e5f88171\" not found" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.202588 kubelet[2179]: E0421 10:11:54.202567 2179 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-7-16e5f88171\" not found" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.256668 kubelet[2179]: E0421 10:11:54.256603 2179 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-7-16e5f88171\" not found" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.341078 kubelet[2179]: 
I0421 10:11:54.340963 2179 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.341078 kubelet[2179]: E0421 10:11:54.341041 2179 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-7-16e5f88171\": node \"ci-4081-3-7-7-16e5f88171\" not found" Apr 21 10:11:54.347024 kubelet[2179]: I0421 10:11:54.346993 2179 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.417347 kubelet[2179]: E0421 10:11:54.417167 2179 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.417347 kubelet[2179]: I0421 10:11:54.417194 2179 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.420355 kubelet[2179]: E0421 10:11:54.420156 2179 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-7-16e5f88171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.420355 kubelet[2179]: I0421 10:11:54.420179 2179 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:54.426113 kubelet[2179]: E0421 10:11:54.426072 2179 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:55.126556 kubelet[2179]: I0421 10:11:55.126488 2179 apiserver.go:52] "Watching apiserver" Apr 21 10:11:55.149668 kubelet[2179]: I0421 10:11:55.149578 2179 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:11:55.204498 kubelet[2179]: I0421 10:11:55.204408 2179 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:55.205312 kubelet[2179]: I0421 10:11:55.205285 2179 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:56.375514 systemd[1]: Reloading requested from client PID 2466 ('systemctl') (unit session-7.scope)... Apr 21 10:11:56.375537 systemd[1]: Reloading... Apr 21 10:11:56.491661 zram_generator::config[2521]: No configuration found. Apr 21 10:11:56.562735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:11:56.633525 systemd[1]: Reloading finished in 257 ms. Apr 21 10:11:56.675117 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:11:56.690101 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:11:56.690303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:11:56.696834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:11:56.831360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:11:56.836950 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:11:56.868945 kubelet[2557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:11:56.878835 kubelet[2557]: I0421 10:11:56.878794 2557 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 10:11:56.878945 kubelet[2557]: I0421 10:11:56.878937 2557 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:11:56.878987 kubelet[2557]: I0421 10:11:56.878982 2557 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 10:11:56.879013 kubelet[2557]: I0421 10:11:56.879006 2557 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:11:56.879227 kubelet[2557]: I0421 10:11:56.879220 2557 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 10:11:56.881216 kubelet[2557]: I0421 10:11:56.881204 2557 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:11:56.884857 kubelet[2557]: I0421 10:11:56.884770 2557 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:11:56.887173 kubelet[2557]: E0421 10:11:56.887124 2557 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:11:56.887216 kubelet[2557]: I0421 10:11:56.887192 2557 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 21 10:11:56.891015 kubelet[2557]: I0421 10:11:56.890986 2557 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 10:11:56.891221 kubelet[2557]: I0421 10:11:56.891189 2557 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:11:56.891367 kubelet[2557]: I0421 10:11:56.891212 2557 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-7-16e5f88171","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:11:56.891367 kubelet[2557]: I0421 10:11:56.891346 2557 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 
10:11:56.891367 kubelet[2557]: I0421 10:11:56.891356 2557 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 10:11:56.891367 kubelet[2557]: I0421 10:11:56.891381 2557 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 10:11:56.891750 kubelet[2557]: I0421 10:11:56.891514 2557 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 10:11:56.891750 kubelet[2557]: I0421 10:11:56.891648 2557 kubelet.go:482] "Attempting to sync node with API server" Apr 21 10:11:56.891750 kubelet[2557]: I0421 10:11:56.891660 2557 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:11:56.891750 kubelet[2557]: I0421 10:11:56.891672 2557 kubelet.go:394] "Adding apiserver pod source" Apr 21 10:11:56.891750 kubelet[2557]: I0421 10:11:56.891679 2557 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:11:56.902091 kubelet[2557]: I0421 10:11:56.901953 2557 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:11:56.903764 kubelet[2557]: I0421 10:11:56.903752 2557 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:11:56.903833 kubelet[2557]: I0421 10:11:56.903826 2557 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 10:11:56.906538 kubelet[2557]: I0421 10:11:56.906496 2557 server.go:1257] "Started kubelet" Apr 21 10:11:56.908519 kubelet[2557]: I0421 10:11:56.908493 2557 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 10:11:56.908603 kubelet[2557]: I0421 10:11:56.908550 2557 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:11:56.909283 kubelet[2557]: I0421 10:11:56.909265 2557 server.go:317] "Adding debug handlers 
to kubelet server" Apr 21 10:11:56.913080 kubelet[2557]: I0421 10:11:56.912929 2557 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:11:56.913080 kubelet[2557]: I0421 10:11:56.912976 2557 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 10:11:56.913151 kubelet[2557]: I0421 10:11:56.913110 2557 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:11:56.916065 kubelet[2557]: I0421 10:11:56.914852 2557 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:11:56.920110 kubelet[2557]: I0421 10:11:56.919168 2557 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 10:11:56.920110 kubelet[2557]: I0421 10:11:56.919243 2557 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 10:11:56.920110 kubelet[2557]: I0421 10:11:56.919339 2557 reconciler.go:29] "Reconciler: start to sync state" Apr 21 10:11:56.922004 kubelet[2557]: I0421 10:11:56.921644 2557 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:11:56.922004 kubelet[2557]: I0421 10:11:56.921659 2557 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:11:56.922004 kubelet[2557]: I0421 10:11:56.921715 2557 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:11:56.924433 kubelet[2557]: I0421 10:11:56.924409 2557 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 10:11:56.925479 kubelet[2557]: I0421 10:11:56.925467 2557 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:11:56.925526 kubelet[2557]: I0421 10:11:56.925519 2557 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 10:11:56.925573 kubelet[2557]: I0421 10:11:56.925567 2557 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 10:11:56.925673 kubelet[2557]: E0421 10:11:56.925658 2557 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:11:56.941966 kubelet[2557]: E0421 10:11:56.941949 2557 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:11:56.971766 kubelet[2557]: I0421 10:11:56.971748 2557 cpu_manager.go:225] "Starting" policy="none" Apr 21 10:11:56.972025 kubelet[2557]: I0421 10:11:56.972014 2557 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 10:11:56.972075 kubelet[2557]: I0421 10:11:56.972068 2557 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 10:11:56.972212 kubelet[2557]: I0421 10:11:56.972202 2557 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 21 10:11:56.972254 kubelet[2557]: I0421 10:11:56.972238 2557 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 21 10:11:56.972280 kubelet[2557]: I0421 10:11:56.972275 2557 policy_none.go:50] "Start" Apr 21 10:11:56.972337 kubelet[2557]: I0421 10:11:56.972331 2557 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 10:11:56.972380 kubelet[2557]: I0421 10:11:56.972365 2557 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 10:11:56.972529 kubelet[2557]: I0421 10:11:56.972477 2557 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state 
checkpoint" Apr 21 10:11:56.972529 kubelet[2557]: I0421 10:11:56.972485 2557 policy_none.go:44] "Start" Apr 21 10:11:56.977332 kubelet[2557]: E0421 10:11:56.976077 2557 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:11:56.977332 kubelet[2557]: I0421 10:11:56.976209 2557 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 10:11:56.977332 kubelet[2557]: I0421 10:11:56.976218 2557 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:11:56.977332 kubelet[2557]: I0421 10:11:56.976558 2557 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 10:11:56.977954 kubelet[2557]: E0421 10:11:56.977684 2557 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:11:57.026294 kubelet[2557]: I0421 10:11:57.026270 2557 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.026521 kubelet[2557]: I0421 10:11:57.026302 2557 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.026654 kubelet[2557]: I0421 10:11:57.026354 2557 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.033688 kubelet[2557]: E0421 10:11:57.033668 2557 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-7-16e5f88171\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.034341 kubelet[2557]: E0421 10:11:57.034318 2557 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.083403 
kubelet[2557]: I0421 10:11:57.083338 2557 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.093213 kubelet[2557]: I0421 10:11:57.093188 2557 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.093471 kubelet[2557]: I0421 10:11:57.093387 2557 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.119957 kubelet[2557]: I0421 10:11:57.119771 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a744c92ced63e68c2611b8b2cf47573f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" (UID: \"a744c92ced63e68c2611b8b2cf47573f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.119957 kubelet[2557]: I0421 10:11:57.119861 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a744c92ced63e68c2611b8b2cf47573f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" (UID: \"a744c92ced63e68c2611b8b2cf47573f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.119957 kubelet[2557]: I0421 10:11:57.119916 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a744c92ced63e68c2611b8b2cf47573f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" (UID: \"a744c92ced63e68c2611b8b2cf47573f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.220633 kubelet[2557]: I0421 10:11:57.220489 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.220633 kubelet[2557]: I0421 10:11:57.220561 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.220633 kubelet[2557]: I0421 10:11:57.220585 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.220906 kubelet[2557]: I0421 10:11:57.220611 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9285ce562c009e20953c923b3b29b66e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-7-16e5f88171\" (UID: \"9285ce562c009e20953c923b3b29b66e\") " pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.220906 kubelet[2557]: I0421 10:11:57.220692 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.220906 kubelet[2557]: I0421 10:11:57.220749 2557 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c9829fcb09461e901f9992aa024e97c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-7-16e5f88171\" (UID: \"c9829fcb09461e901f9992aa024e97c8\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.906265 kubelet[2557]: I0421 10:11:57.905370 2557 apiserver.go:52] "Watching apiserver" Apr 21 10:11:57.921880 kubelet[2557]: I0421 10:11:57.921843 2557 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 21 10:11:57.938058 kubelet[2557]: I0421 10:11:57.937919 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" podStartSLOduration=2.937893046 podStartE2EDuration="2.937893046s" podCreationTimestamp="2026-04-21 10:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:11:57.937078074 +0000 UTC m=+1.096757344" watchObservedRunningTime="2026-04-21 10:11:57.937893046 +0000 UTC m=+1.097572306" Apr 21 10:11:57.952707 kubelet[2557]: I0421 10:11:57.952658 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-7-16e5f88171" podStartSLOduration=0.952646222 podStartE2EDuration="952.646222ms" podCreationTimestamp="2026-04-21 10:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:11:57.951489607 +0000 UTC m=+1.111168867" watchObservedRunningTime="2026-04-21 10:11:57.952646222 +0000 UTC m=+1.112325492" Apr 21 10:11:57.952707 kubelet[2557]: I0421 10:11:57.952712 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" 
podStartSLOduration=2.952709778 podStartE2EDuration="2.952709778s" podCreationTimestamp="2026-04-21 10:11:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:11:57.946174652 +0000 UTC m=+1.105853912" watchObservedRunningTime="2026-04-21 10:11:57.952709778 +0000 UTC m=+1.112389048" Apr 21 10:11:57.956631 kubelet[2557]: I0421 10:11:57.954884 2557 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.956631 kubelet[2557]: I0421 10:11:57.955333 2557 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.959795 kubelet[2557]: E0421 10:11:57.959771 2557 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-7-16e5f88171\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-7-16e5f88171" Apr 21 10:11:57.960844 kubelet[2557]: E0421 10:11:57.960705 2557 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-7-16e5f88171\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-7-16e5f88171" Apr 21 10:12:02.947669 kubelet[2557]: I0421 10:12:02.947580 2557 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:12:02.948537 kubelet[2557]: I0421 10:12:02.948002 2557 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:12:02.948709 containerd[1501]: time="2026-04-21T10:12:02.947866183Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:12:04.102279 systemd[1]: Created slice kubepods-besteffort-pod2724a6b6_9cc2_4251_ae82_8549a4730125.slice - libcontainer container kubepods-besteffort-pod2724a6b6_9cc2_4251_ae82_8549a4730125.slice. 
Apr 21 10:12:04.169344 kubelet[2557]: I0421 10:12:04.168308 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2724a6b6-9cc2-4251-ae82-8549a4730125-kube-proxy\") pod \"kube-proxy-d29c9\" (UID: \"2724a6b6-9cc2-4251-ae82-8549a4730125\") " pod="kube-system/kube-proxy-d29c9" Apr 21 10:12:04.169344 kubelet[2557]: I0421 10:12:04.168384 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2724a6b6-9cc2-4251-ae82-8549a4730125-lib-modules\") pod \"kube-proxy-d29c9\" (UID: \"2724a6b6-9cc2-4251-ae82-8549a4730125\") " pod="kube-system/kube-proxy-d29c9" Apr 21 10:12:04.169344 kubelet[2557]: I0421 10:12:04.168415 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2724a6b6-9cc2-4251-ae82-8549a4730125-xtables-lock\") pod \"kube-proxy-d29c9\" (UID: \"2724a6b6-9cc2-4251-ae82-8549a4730125\") " pod="kube-system/kube-proxy-d29c9" Apr 21 10:12:04.169344 kubelet[2557]: I0421 10:12:04.168440 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vptw\" (UniqueName: \"kubernetes.io/projected/2724a6b6-9cc2-4251-ae82-8549a4730125-kube-api-access-5vptw\") pod \"kube-proxy-d29c9\" (UID: \"2724a6b6-9cc2-4251-ae82-8549a4730125\") " pod="kube-system/kube-proxy-d29c9" Apr 21 10:12:04.201225 systemd[1]: Created slice kubepods-besteffort-podc3134158_eb81_415b_a377_54dd8df0b47b.slice - libcontainer container kubepods-besteffort-podc3134158_eb81_415b_a377_54dd8df0b47b.slice. 
Apr 21 10:12:04.269733 kubelet[2557]: I0421 10:12:04.269670 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lsjd\" (UniqueName: \"kubernetes.io/projected/c3134158-eb81-415b-a377-54dd8df0b47b-kube-api-access-2lsjd\") pod \"tigera-operator-6cf4cccc57-qvglx\" (UID: \"c3134158-eb81-415b-a377-54dd8df0b47b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-qvglx" Apr 21 10:12:04.269903 kubelet[2557]: I0421 10:12:04.269745 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3134158-eb81-415b-a377-54dd8df0b47b-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-qvglx\" (UID: \"c3134158-eb81-415b-a377-54dd8df0b47b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-qvglx" Apr 21 10:12:04.417010 containerd[1501]: time="2026-04-21T10:12:04.416196720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d29c9,Uid:2724a6b6-9cc2-4251-ae82-8549a4730125,Namespace:kube-system,Attempt:0,}" Apr 21 10:12:04.460734 containerd[1501]: time="2026-04-21T10:12:04.459893263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:04.460734 containerd[1501]: time="2026-04-21T10:12:04.459974075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:04.460734 containerd[1501]: time="2026-04-21T10:12:04.459993824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:04.460734 containerd[1501]: time="2026-04-21T10:12:04.460176228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:04.487742 systemd[1]: Started cri-containerd-9d89b2414486e33d3f8a4521f41c80a076639a7ad4e447bca3acb2976a6364f8.scope - libcontainer container 9d89b2414486e33d3f8a4521f41c80a076639a7ad4e447bca3acb2976a6364f8. Apr 21 10:12:04.511041 containerd[1501]: time="2026-04-21T10:12:04.510768077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-qvglx,Uid:c3134158-eb81-415b-a377-54dd8df0b47b,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:12:04.517190 containerd[1501]: time="2026-04-21T10:12:04.517120839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d29c9,Uid:2724a6b6-9cc2-4251-ae82-8549a4730125,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d89b2414486e33d3f8a4521f41c80a076639a7ad4e447bca3acb2976a6364f8\"" Apr 21 10:12:04.522714 containerd[1501]: time="2026-04-21T10:12:04.522669514Z" level=info msg="CreateContainer within sandbox \"9d89b2414486e33d3f8a4521f41c80a076639a7ad4e447bca3acb2976a6364f8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:12:04.538882 containerd[1501]: time="2026-04-21T10:12:04.538845274Z" level=info msg="CreateContainer within sandbox \"9d89b2414486e33d3f8a4521f41c80a076639a7ad4e447bca3acb2976a6364f8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9041fc5dbc4db8eb6140d9474cbc8ad5f19358e6e8f112408acba7a95fd07a32\"" Apr 21 10:12:04.541100 containerd[1501]: time="2026-04-21T10:12:04.539052585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:04.541100 containerd[1501]: time="2026-04-21T10:12:04.539848399Z" level=info msg="StartContainer for \"9041fc5dbc4db8eb6140d9474cbc8ad5f19358e6e8f112408acba7a95fd07a32\"" Apr 21 10:12:04.543168 containerd[1501]: time="2026-04-21T10:12:04.541422301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:04.543168 containerd[1501]: time="2026-04-21T10:12:04.541442151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:04.543168 containerd[1501]: time="2026-04-21T10:12:04.541517223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:04.560810 systemd[1]: Started cri-containerd-6857d601cf82eae9a3429b028ded4724cf99a1a2e03aa3caf86c5e00f3758beb.scope - libcontainer container 6857d601cf82eae9a3429b028ded4724cf99a1a2e03aa3caf86c5e00f3758beb. Apr 21 10:12:04.564418 systemd[1]: Started cri-containerd-9041fc5dbc4db8eb6140d9474cbc8ad5f19358e6e8f112408acba7a95fd07a32.scope - libcontainer container 9041fc5dbc4db8eb6140d9474cbc8ad5f19358e6e8f112408acba7a95fd07a32. Apr 21 10:12:04.601093 containerd[1501]: time="2026-04-21T10:12:04.600909506Z" level=info msg="StartContainer for \"9041fc5dbc4db8eb6140d9474cbc8ad5f19358e6e8f112408acba7a95fd07a32\" returns successfully" Apr 21 10:12:04.608812 containerd[1501]: time="2026-04-21T10:12:04.608677772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-qvglx,Uid:c3134158-eb81-415b-a377-54dd8df0b47b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6857d601cf82eae9a3429b028ded4724cf99a1a2e03aa3caf86c5e00f3758beb\"" Apr 21 10:12:04.615817 containerd[1501]: time="2026-04-21T10:12:04.615786158Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:12:06.466015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1300605527.mount: Deactivated successfully. 
Apr 21 10:12:07.752098 containerd[1501]: time="2026-04-21T10:12:07.752045603Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:07.752987 containerd[1501]: time="2026-04-21T10:12:07.752836350Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:12:07.753983 containerd[1501]: time="2026-04-21T10:12:07.753953556Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:07.755772 containerd[1501]: time="2026-04-21T10:12:07.755742210Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:07.756509 containerd[1501]: time="2026-04-21T10:12:07.756397393Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.140579107s" Apr 21 10:12:07.756509 containerd[1501]: time="2026-04-21T10:12:07.756431394Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:12:07.760456 containerd[1501]: time="2026-04-21T10:12:07.760405507Z" level=info msg="CreateContainer within sandbox \"6857d601cf82eae9a3429b028ded4724cf99a1a2e03aa3caf86c5e00f3758beb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:12:07.774109 containerd[1501]: time="2026-04-21T10:12:07.774066814Z" level=info msg="CreateContainer within sandbox 
\"6857d601cf82eae9a3429b028ded4724cf99a1a2e03aa3caf86c5e00f3758beb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b\"" Apr 21 10:12:07.774632 containerd[1501]: time="2026-04-21T10:12:07.774588177Z" level=info msg="StartContainer for \"22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b\"" Apr 21 10:12:07.800523 systemd[1]: run-containerd-runc-k8s.io-22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b-runc.nElUiA.mount: Deactivated successfully. Apr 21 10:12:07.806776 systemd[1]: Started cri-containerd-22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b.scope - libcontainer container 22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b. Apr 21 10:12:07.831367 containerd[1501]: time="2026-04-21T10:12:07.831316272Z" level=info msg="StartContainer for \"22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b\" returns successfully" Apr 21 10:12:07.936359 kubelet[2557]: I0421 10:12:07.935590 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-d29c9" podStartSLOduration=3.935571168 podStartE2EDuration="3.935571168s" podCreationTimestamp="2026-04-21 10:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:12:04.981905429 +0000 UTC m=+8.141584689" watchObservedRunningTime="2026-04-21 10:12:07.935571168 +0000 UTC m=+11.095250458" Apr 21 10:12:07.989189 kubelet[2557]: I0421 10:12:07.988960 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-qvglx" podStartSLOduration=0.84494355 podStartE2EDuration="3.988948156s" podCreationTimestamp="2026-04-21 10:12:04 +0000 UTC" firstStartedPulling="2026-04-21 10:12:04.613229582 +0000 UTC m=+7.772908842" lastFinishedPulling="2026-04-21 10:12:07.757234178 +0000 UTC 
m=+10.916913448" observedRunningTime="2026-04-21 10:12:07.987169126 +0000 UTC m=+11.146848387" watchObservedRunningTime="2026-04-21 10:12:07.988948156 +0000 UTC m=+11.148627416" Apr 21 10:12:13.056168 sudo[1694]: pam_unix(sudo:session): session closed for user root Apr 21 10:12:13.087847 sshd[1691]: pam_unix(sshd:session): session closed for user core Apr 21 10:12:13.093029 systemd[1]: sshd@6-37.27.23.25:22-50.85.169.122:40066.service: Deactivated successfully. Apr 21 10:12:13.095316 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:12:13.095921 systemd[1]: session-7.scope: Consumed 2.789s CPU time, 157.4M memory peak, 0B memory swap peak. Apr 21 10:12:13.098202 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:12:13.101167 systemd-logind[1486]: Removed session 7. Apr 21 10:12:14.869647 systemd[1]: Created slice kubepods-besteffort-pod4b5aa4ad_c90c_40bf_89cd_15a06fded885.slice - libcontainer container kubepods-besteffort-pod4b5aa4ad_c90c_40bf_89cd_15a06fded885.slice. Apr 21 10:12:14.935938 systemd[1]: Created slice kubepods-besteffort-pod069d7256_72ef_4975_849d_ecc750c0ed28.slice - libcontainer container kubepods-besteffort-pod069d7256_72ef_4975_849d_ecc750c0ed28.slice. 
Apr 21 10:12:14.938934 kubelet[2557]: I0421 10:12:14.938903 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-bpffs\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.938934 kubelet[2557]: I0421 10:12:14.938931 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-flexvol-driver-host\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939390 kubelet[2557]: I0421 10:12:14.938943 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-lib-modules\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939390 kubelet[2557]: I0421 10:12:14.938956 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/069d7256-72ef-4975-849d-ecc750c0ed28-node-certs\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939390 kubelet[2557]: I0421 10:12:14.938966 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-sys-fs\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939390 kubelet[2557]: I0421 10:12:14.938976 2557 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-var-lib-calico\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939390 kubelet[2557]: I0421 10:12:14.938987 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-var-run-calico\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939515 kubelet[2557]: I0421 10:12:14.938998 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b5aa4ad-c90c-40bf-89cd-15a06fded885-tigera-ca-bundle\") pod \"calico-typha-6ff95c8874-sl66k\" (UID: \"4b5aa4ad-c90c-40bf-89cd-15a06fded885\") " pod="calico-system/calico-typha-6ff95c8874-sl66k" Apr 21 10:12:14.939515 kubelet[2557]: I0421 10:12:14.939010 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rb7\" (UniqueName: \"kubernetes.io/projected/4b5aa4ad-c90c-40bf-89cd-15a06fded885-kube-api-access-22rb7\") pod \"calico-typha-6ff95c8874-sl66k\" (UID: \"4b5aa4ad-c90c-40bf-89cd-15a06fded885\") " pod="calico-system/calico-typha-6ff95c8874-sl66k" Apr 21 10:12:14.939515 kubelet[2557]: I0421 10:12:14.939020 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-cni-log-dir\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939515 kubelet[2557]: I0421 10:12:14.939029 2557 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxffk\" (UniqueName: \"kubernetes.io/projected/069d7256-72ef-4975-849d-ecc750c0ed28-kube-api-access-vxffk\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939515 kubelet[2557]: I0421 10:12:14.939053 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-policysync\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939678 kubelet[2557]: I0421 10:12:14.939065 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-cni-bin-dir\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939678 kubelet[2557]: I0421 10:12:14.939075 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-cni-net-dir\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939678 kubelet[2557]: I0421 10:12:14.939084 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-xtables-lock\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939678 kubelet[2557]: I0421 10:12:14.939097 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"typha-certs\" (UniqueName: \"kubernetes.io/secret/4b5aa4ad-c90c-40bf-89cd-15a06fded885-typha-certs\") pod \"calico-typha-6ff95c8874-sl66k\" (UID: \"4b5aa4ad-c90c-40bf-89cd-15a06fded885\") " pod="calico-system/calico-typha-6ff95c8874-sl66k" Apr 21 10:12:14.939678 kubelet[2557]: I0421 10:12:14.939109 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/069d7256-72ef-4975-849d-ecc750c0ed28-nodeproc\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:14.939800 kubelet[2557]: I0421 10:12:14.939119 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/069d7256-72ef-4975-849d-ecc750c0ed28-tigera-ca-bundle\") pod \"calico-node-k87nd\" (UID: \"069d7256-72ef-4975-849d-ecc750c0ed28\") " pod="calico-system/calico-node-k87nd" Apr 21 10:12:15.059569 kubelet[2557]: E0421 10:12:15.059505 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.059569 kubelet[2557]: W0421 10:12:15.059526 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.059569 kubelet[2557]: E0421 10:12:15.059548 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.061225 kubelet[2557]: E0421 10:12:15.061182 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.061225 kubelet[2557]: W0421 10:12:15.061196 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.061225 kubelet[2557]: E0421 10:12:15.061209 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.063450 kubelet[2557]: E0421 10:12:15.063420 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:15.077636 kubelet[2557]: E0421 10:12:15.077076 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.077636 kubelet[2557]: W0421 10:12:15.077093 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.077636 kubelet[2557]: E0421 10:12:15.077111 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.081172 kubelet[2557]: E0421 10:12:15.081153 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.081172 kubelet[2557]: W0421 10:12:15.081168 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.081253 kubelet[2557]: E0421 10:12:15.081184 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.128966 kubelet[2557]: E0421 10:12:15.128851 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.128966 kubelet[2557]: W0421 10:12:15.128877 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.128966 kubelet[2557]: E0421 10:12:15.128903 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.129805 kubelet[2557]: E0421 10:12:15.129768 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.129997 kubelet[2557]: W0421 10:12:15.129900 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.129997 kubelet[2557]: E0421 10:12:15.129919 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.130433 kubelet[2557]: E0421 10:12:15.130315 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.130433 kubelet[2557]: W0421 10:12:15.130324 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.130433 kubelet[2557]: E0421 10:12:15.130334 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.130587 kubelet[2557]: E0421 10:12:15.130580 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.130759 kubelet[2557]: W0421 10:12:15.130685 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.130759 kubelet[2557]: E0421 10:12:15.130697 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.131070 kubelet[2557]: E0421 10:12:15.131018 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.131070 kubelet[2557]: W0421 10:12:15.131027 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.131070 kubelet[2557]: E0421 10:12:15.131035 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.131453 kubelet[2557]: E0421 10:12:15.131384 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.131453 kubelet[2557]: W0421 10:12:15.131392 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.131453 kubelet[2557]: E0421 10:12:15.131399 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.131790 kubelet[2557]: E0421 10:12:15.131720 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.131790 kubelet[2557]: W0421 10:12:15.131728 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.131790 kubelet[2557]: E0421 10:12:15.131736 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.132120 kubelet[2557]: E0421 10:12:15.132035 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.132120 kubelet[2557]: W0421 10:12:15.132043 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.132120 kubelet[2557]: E0421 10:12:15.132050 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.132333 kubelet[2557]: E0421 10:12:15.132241 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.132333 kubelet[2557]: W0421 10:12:15.132250 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.132333 kubelet[2557]: E0421 10:12:15.132270 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.132540 kubelet[2557]: E0421 10:12:15.132457 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.132540 kubelet[2557]: W0421 10:12:15.132464 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.132540 kubelet[2557]: E0421 10:12:15.132470 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.132699 kubelet[2557]: E0421 10:12:15.132691 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.132817 kubelet[2557]: W0421 10:12:15.132773 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.132817 kubelet[2557]: E0421 10:12:15.132783 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.133205 kubelet[2557]: E0421 10:12:15.133112 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.133205 kubelet[2557]: W0421 10:12:15.133120 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.133205 kubelet[2557]: E0421 10:12:15.133134 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.133343 kubelet[2557]: E0421 10:12:15.133335 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.133436 kubelet[2557]: W0421 10:12:15.133396 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.133436 kubelet[2557]: E0421 10:12:15.133405 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.133726 kubelet[2557]: E0421 10:12:15.133652 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.133726 kubelet[2557]: W0421 10:12:15.133660 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.133726 kubelet[2557]: E0421 10:12:15.133666 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.133853 kubelet[2557]: E0421 10:12:15.133846 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.133928 kubelet[2557]: W0421 10:12:15.133885 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.133928 kubelet[2557]: E0421 10:12:15.133893 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.134286 kubelet[2557]: E0421 10:12:15.134174 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.134286 kubelet[2557]: W0421 10:12:15.134193 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.134286 kubelet[2557]: E0421 10:12:15.134200 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.134428 kubelet[2557]: E0421 10:12:15.134421 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.134534 kubelet[2557]: W0421 10:12:15.134462 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.134534 kubelet[2557]: E0421 10:12:15.134471 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.134715 kubelet[2557]: E0421 10:12:15.134707 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.134784 kubelet[2557]: W0421 10:12:15.134743 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.134784 kubelet[2557]: E0421 10:12:15.134751 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.135078 kubelet[2557]: E0421 10:12:15.134992 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.135078 kubelet[2557]: W0421 10:12:15.134999 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.135078 kubelet[2557]: E0421 10:12:15.135005 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.135208 kubelet[2557]: E0421 10:12:15.135201 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.135241 kubelet[2557]: W0421 10:12:15.135234 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.135332 kubelet[2557]: E0421 10:12:15.135276 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.140642 kubelet[2557]: E0421 10:12:15.140632 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.140712 kubelet[2557]: W0421 10:12:15.140690 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.140712 kubelet[2557]: E0421 10:12:15.140703 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.140812 kubelet[2557]: I0421 10:12:15.140722 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/27b58c18-a535-4675-97b1-656bf0345381-registration-dir\") pod \"csi-node-driver-kmvbc\" (UID: \"27b58c18-a535-4675-97b1-656bf0345381\") " pod="calico-system/csi-node-driver-kmvbc" Apr 21 10:12:15.140962 kubelet[2557]: E0421 10:12:15.140943 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.141002 kubelet[2557]: W0421 10:12:15.140958 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.141002 kubelet[2557]: E0421 10:12:15.140981 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.141041 kubelet[2557]: I0421 10:12:15.141002 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/27b58c18-a535-4675-97b1-656bf0345381-socket-dir\") pod \"csi-node-driver-kmvbc\" (UID: \"27b58c18-a535-4675-97b1-656bf0345381\") " pod="calico-system/csi-node-driver-kmvbc" Apr 21 10:12:15.141285 kubelet[2557]: E0421 10:12:15.141254 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.141285 kubelet[2557]: W0421 10:12:15.141284 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.141342 kubelet[2557]: E0421 10:12:15.141293 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.141342 kubelet[2557]: I0421 10:12:15.141310 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27b58c18-a535-4675-97b1-656bf0345381-kubelet-dir\") pod \"csi-node-driver-kmvbc\" (UID: \"27b58c18-a535-4675-97b1-656bf0345381\") " pod="calico-system/csi-node-driver-kmvbc" Apr 21 10:12:15.141672 kubelet[2557]: E0421 10:12:15.141585 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.141672 kubelet[2557]: W0421 10:12:15.141634 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.141672 kubelet[2557]: E0421 10:12:15.141643 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.141672 kubelet[2557]: I0421 10:12:15.141661 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzd5w\" (UniqueName: \"kubernetes.io/projected/27b58c18-a535-4675-97b1-656bf0345381-kube-api-access-lzd5w\") pod \"csi-node-driver-kmvbc\" (UID: \"27b58c18-a535-4675-97b1-656bf0345381\") " pod="calico-system/csi-node-driver-kmvbc" Apr 21 10:12:15.142433 kubelet[2557]: E0421 10:12:15.142422 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.142581 kubelet[2557]: W0421 10:12:15.142459 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.142581 kubelet[2557]: E0421 10:12:15.142469 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.142724 kubelet[2557]: I0421 10:12:15.142693 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/27b58c18-a535-4675-97b1-656bf0345381-varrun\") pod \"csi-node-driver-kmvbc\" (UID: \"27b58c18-a535-4675-97b1-656bf0345381\") " pod="calico-system/csi-node-driver-kmvbc" Apr 21 10:12:15.142851 kubelet[2557]: E0421 10:12:15.142828 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.142851 kubelet[2557]: W0421 10:12:15.142835 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.142851 kubelet[2557]: E0421 10:12:15.142842 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.143205 kubelet[2557]: E0421 10:12:15.143182 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.143205 kubelet[2557]: W0421 10:12:15.143189 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.143205 kubelet[2557]: E0421 10:12:15.143196 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.143638 kubelet[2557]: E0421 10:12:15.143578 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.143638 kubelet[2557]: W0421 10:12:15.143588 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.143638 kubelet[2557]: E0421 10:12:15.143596 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.144055 kubelet[2557]: E0421 10:12:15.143952 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.144055 kubelet[2557]: W0421 10:12:15.143960 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.144055 kubelet[2557]: E0421 10:12:15.143967 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.144353 kubelet[2557]: E0421 10:12:15.144270 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.144353 kubelet[2557]: W0421 10:12:15.144277 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.144353 kubelet[2557]: E0421 10:12:15.144284 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.144638 kubelet[2557]: E0421 10:12:15.144571 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.144638 kubelet[2557]: W0421 10:12:15.144579 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.144638 kubelet[2557]: E0421 10:12:15.144587 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.145025 kubelet[2557]: E0421 10:12:15.144918 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.145025 kubelet[2557]: W0421 10:12:15.144927 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.145025 kubelet[2557]: E0421 10:12:15.144936 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.145332 kubelet[2557]: E0421 10:12:15.145224 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.145332 kubelet[2557]: W0421 10:12:15.145232 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.145332 kubelet[2557]: E0421 10:12:15.145239 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.145482 kubelet[2557]: E0421 10:12:15.145462 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.145482 kubelet[2557]: W0421 10:12:15.145474 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.145482 kubelet[2557]: E0421 10:12:15.145480 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.145765 kubelet[2557]: E0421 10:12:15.145724 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.145765 kubelet[2557]: W0421 10:12:15.145732 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.145765 kubelet[2557]: E0421 10:12:15.145739 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.174207 containerd[1501]: time="2026-04-21T10:12:15.174162276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff95c8874-sl66k,Uid:4b5aa4ad-c90c-40bf-89cd-15a06fded885,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:15.195752 containerd[1501]: time="2026-04-21T10:12:15.195293748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:15.195752 containerd[1501]: time="2026-04-21T10:12:15.195337333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:15.195752 containerd[1501]: time="2026-04-21T10:12:15.195411616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:15.195752 containerd[1501]: time="2026-04-21T10:12:15.195476755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:15.216805 systemd[1]: Started cri-containerd-61922d3924d17d5dbe3d2fca27d7cc7704201ca86305c5d9a2bdad25023b1115.scope - libcontainer container 61922d3924d17d5dbe3d2fca27d7cc7704201ca86305c5d9a2bdad25023b1115. Apr 21 10:12:15.241134 containerd[1501]: time="2026-04-21T10:12:15.240856594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k87nd,Uid:069d7256-72ef-4975-849d-ecc750c0ed28,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:15.244142 kubelet[2557]: E0421 10:12:15.244119 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.244389 kubelet[2557]: W0421 10:12:15.244373 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.244557 kubelet[2557]: E0421 10:12:15.244463 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.245175 kubelet[2557]: E0421 10:12:15.244969 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.245175 kubelet[2557]: W0421 10:12:15.244978 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.245175 kubelet[2557]: E0421 10:12:15.244986 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.245386 kubelet[2557]: E0421 10:12:15.245377 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.246072 kubelet[2557]: W0421 10:12:15.246051 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.246072 kubelet[2557]: E0421 10:12:15.246067 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:15.259876 kubelet[2557]: E0421 10:12:15.259864 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:15.259973 kubelet[2557]: W0421 10:12:15.259922 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:15.259973 kubelet[2557]: E0421 10:12:15.259933 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:15.263835 containerd[1501]: time="2026-04-21T10:12:15.263811110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6ff95c8874-sl66k,Uid:4b5aa4ad-c90c-40bf-89cd-15a06fded885,Namespace:calico-system,Attempt:0,} returns sandbox id \"61922d3924d17d5dbe3d2fca27d7cc7704201ca86305c5d9a2bdad25023b1115\"" Apr 21 10:12:15.265419 containerd[1501]: time="2026-04-21T10:12:15.265388001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:12:15.272983 containerd[1501]: time="2026-04-21T10:12:15.272713578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:15.272983 containerd[1501]: time="2026-04-21T10:12:15.272767360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:15.272983 containerd[1501]: time="2026-04-21T10:12:15.272777876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:15.272983 containerd[1501]: time="2026-04-21T10:12:15.272845349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:15.287757 systemd[1]: Started cri-containerd-ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7.scope - libcontainer container ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7. Apr 21 10:12:15.307916 containerd[1501]: time="2026-04-21T10:12:15.307875854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k87nd,Uid:069d7256-72ef-4975-849d-ecc750c0ed28,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\"" Apr 21 10:12:16.305315 update_engine[1487]: I20260421 10:12:16.305204 1487 update_attempter.cc:509] Updating boot flags... Apr 21 10:12:16.369678 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3124) Apr 21 10:12:16.422990 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3125) Apr 21 10:12:16.927525 kubelet[2557]: E0421 10:12:16.927171 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:17.086173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4029567557.mount: Deactivated successfully. 
Apr 21 10:12:17.430881 containerd[1501]: time="2026-04-21T10:12:17.430819204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:17.431800 containerd[1501]: time="2026-04-21T10:12:17.431769052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:12:17.432578 containerd[1501]: time="2026-04-21T10:12:17.432548152Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:17.434053 containerd[1501]: time="2026-04-21T10:12:17.434020893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:17.434594 containerd[1501]: time="2026-04-21T10:12:17.434444985Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.169032046s" Apr 21 10:12:17.434594 containerd[1501]: time="2026-04-21T10:12:17.434467679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:12:17.437701 containerd[1501]: time="2026-04-21T10:12:17.437683479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:12:17.451757 containerd[1501]: time="2026-04-21T10:12:17.451719575Z" level=info msg="CreateContainer within sandbox \"61922d3924d17d5dbe3d2fca27d7cc7704201ca86305c5d9a2bdad25023b1115\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:12:17.462724 containerd[1501]: time="2026-04-21T10:12:17.462695777Z" level=info msg="CreateContainer within sandbox \"61922d3924d17d5dbe3d2fca27d7cc7704201ca86305c5d9a2bdad25023b1115\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"441f52ddbef4f3c9cca8d22fce976cc416e706d63da07112b5bf986c6ac898ab\"" Apr 21 10:12:17.463547 containerd[1501]: time="2026-04-21T10:12:17.463135062Z" level=info msg="StartContainer for \"441f52ddbef4f3c9cca8d22fce976cc416e706d63da07112b5bf986c6ac898ab\"" Apr 21 10:12:17.487756 systemd[1]: Started cri-containerd-441f52ddbef4f3c9cca8d22fce976cc416e706d63da07112b5bf986c6ac898ab.scope - libcontainer container 441f52ddbef4f3c9cca8d22fce976cc416e706d63da07112b5bf986c6ac898ab. Apr 21 10:12:17.522438 containerd[1501]: time="2026-04-21T10:12:17.522400872Z" level=info msg="StartContainer for \"441f52ddbef4f3c9cca8d22fce976cc416e706d63da07112b5bf986c6ac898ab\" returns successfully" Apr 21 10:12:18.025549 kubelet[2557]: I0421 10:12:18.024540 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6ff95c8874-sl66k" podStartSLOduration=1.852745254 podStartE2EDuration="4.024519674s" podCreationTimestamp="2026-04-21 10:12:14 +0000 UTC" firstStartedPulling="2026-04-21 10:12:15.265063729 +0000 UTC m=+18.424742999" lastFinishedPulling="2026-04-21 10:12:17.436838149 +0000 UTC m=+20.596517419" observedRunningTime="2026-04-21 10:12:18.024340162 +0000 UTC m=+21.184019472" watchObservedRunningTime="2026-04-21 10:12:18.024519674 +0000 UTC m=+21.184198974" Apr 21 10:12:18.052679 kubelet[2557]: E0421 10:12:18.052590 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.052679 kubelet[2557]: W0421 10:12:18.052655 2557 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.052679 kubelet[2557]: E0421 10:12:18.052689 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.053269 kubelet[2557]: E0421 10:12:18.053236 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.053269 kubelet[2557]: W0421 10:12:18.053256 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.053422 kubelet[2557]: E0421 10:12:18.053274 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.053844 kubelet[2557]: E0421 10:12:18.053805 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.053844 kubelet[2557]: W0421 10:12:18.053828 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.053938 kubelet[2557]: E0421 10:12:18.053847 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.070777 kubelet[2557]: E0421 10:12:18.070738 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.070777 kubelet[2557]: W0421 10:12:18.070766 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.070873 kubelet[2557]: E0421 10:12:18.070784 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.071257 kubelet[2557]: E0421 10:12:18.071222 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.071333 kubelet[2557]: W0421 10:12:18.071300 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.071333 kubelet[2557]: E0421 10:12:18.071317 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.071950 kubelet[2557]: E0421 10:12:18.071914 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.071950 kubelet[2557]: W0421 10:12:18.071936 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.072053 kubelet[2557]: E0421 10:12:18.071951 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.072469 kubelet[2557]: E0421 10:12:18.072433 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.072469 kubelet[2557]: W0421 10:12:18.072456 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.072558 kubelet[2557]: E0421 10:12:18.072471 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.072998 kubelet[2557]: E0421 10:12:18.072965 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.072998 kubelet[2557]: W0421 10:12:18.072986 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.073093 kubelet[2557]: E0421 10:12:18.073004 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.073516 kubelet[2557]: E0421 10:12:18.073473 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.073567 kubelet[2557]: W0421 10:12:18.073508 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.073567 kubelet[2557]: E0421 10:12:18.073531 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.074080 kubelet[2557]: E0421 10:12:18.074045 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.074080 kubelet[2557]: W0421 10:12:18.074071 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.074173 kubelet[2557]: E0421 10:12:18.074086 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.074594 kubelet[2557]: E0421 10:12:18.074559 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.074594 kubelet[2557]: W0421 10:12:18.074582 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.074776 kubelet[2557]: E0421 10:12:18.074599 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.075531 kubelet[2557]: E0421 10:12:18.075492 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.075531 kubelet[2557]: W0421 10:12:18.075519 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.075684 kubelet[2557]: E0421 10:12:18.075535 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.076092 kubelet[2557]: E0421 10:12:18.076057 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.076092 kubelet[2557]: W0421 10:12:18.076081 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.076207 kubelet[2557]: E0421 10:12:18.076096 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.078025 kubelet[2557]: E0421 10:12:18.077990 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.078025 kubelet[2557]: W0421 10:12:18.078015 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.078102 kubelet[2557]: E0421 10:12:18.078034 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.079750 kubelet[2557]: E0421 10:12:18.079717 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.079750 kubelet[2557]: W0421 10:12:18.079740 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.079871 kubelet[2557]: E0421 10:12:18.079766 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.081770 kubelet[2557]: E0421 10:12:18.081273 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.081770 kubelet[2557]: W0421 10:12:18.081311 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.081770 kubelet[2557]: E0421 10:12:18.081340 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:18.084666 kubelet[2557]: E0421 10:12:18.082919 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:18.084666 kubelet[2557]: W0421 10:12:18.082940 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:18.084666 kubelet[2557]: E0421 10:12:18.082969 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:18.928670 kubelet[2557]: E0421 10:12:18.928577 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381"
Apr 21 10:12:19.011213 kubelet[2557]: I0421 10:12:19.011094 2557 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:12:19.074090 kubelet[2557]: E0421 10:12:19.074047 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:12:19.076404 kubelet[2557]: W0421 10:12:19.076061 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:12:19.076404 kubelet[2557]: E0421 10:12:19.076121 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:12:19.076970 kubelet[2557]: E0421 10:12:19.076877 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:12:19.076970 kubelet[2557]: W0421 10:12:19.076924 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:12:19.076970 kubelet[2557]: E0421 10:12:19.076943 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:12:19.100326 kubelet[2557]: E0421 10:12:19.100277 2557 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:12:19.100326 kubelet[2557]: W0421 10:12:19.100285 2557 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:12:19.100326 kubelet[2557]: E0421 10:12:19.100313 2557 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:12:19.150030 containerd[1501]: time="2026-04-21T10:12:19.149969815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:19.151301 containerd[1501]: time="2026-04-21T10:12:19.151167207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:12:19.154054 containerd[1501]: time="2026-04-21T10:12:19.152645383Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:19.155174 containerd[1501]: time="2026-04-21T10:12:19.155142601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:19.155814 containerd[1501]: time="2026-04-21T10:12:19.155708096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.717955773s" Apr 21 10:12:19.155897 containerd[1501]: time="2026-04-21T10:12:19.155883491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:12:19.160642 containerd[1501]: time="2026-04-21T10:12:19.160590401Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:12:19.176738 containerd[1501]: time="2026-04-21T10:12:19.176693350Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560\"" Apr 21 10:12:19.178662 containerd[1501]: time="2026-04-21T10:12:19.178436407Z" level=info msg="StartContainer for \"c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560\"" Apr 21 10:12:19.213231 systemd[1]: run-containerd-runc-k8s.io-c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560-runc.jrLrHf.mount: Deactivated successfully. Apr 21 10:12:19.218757 systemd[1]: Started cri-containerd-c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560.scope - libcontainer container c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560. 
Apr 21 10:12:19.244980 containerd[1501]: time="2026-04-21T10:12:19.244926595Z" level=info msg="StartContainer for \"c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560\" returns successfully" Apr 21 10:12:19.256382 systemd[1]: cri-containerd-c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560.scope: Deactivated successfully. Apr 21 10:12:19.369549 containerd[1501]: time="2026-04-21T10:12:19.369483840Z" level=info msg="shim disconnected" id=c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560 namespace=k8s.io Apr 21 10:12:19.369549 containerd[1501]: time="2026-04-21T10:12:19.369535919Z" level=warning msg="cleaning up after shim disconnected" id=c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560 namespace=k8s.io Apr 21 10:12:19.369549 containerd[1501]: time="2026-04-21T10:12:19.369542759Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:20.018716 containerd[1501]: time="2026-04-21T10:12:20.018424615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:12:20.171136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c76a5a110094752ea05f5e96f4ca657fa614a8a9940accd12434c4bca104e560-rootfs.mount: Deactivated successfully. 
Apr 21 10:12:20.926795 kubelet[2557]: E0421 10:12:20.926190 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:22.927774 kubelet[2557]: E0421 10:12:22.927730 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:24.013295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3935176349.mount: Deactivated successfully. Apr 21 10:12:24.041553 containerd[1501]: time="2026-04-21T10:12:24.041510487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:24.042472 containerd[1501]: time="2026-04-21T10:12:24.042392028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:12:24.043424 containerd[1501]: time="2026-04-21T10:12:24.043107167Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:24.044633 containerd[1501]: time="2026-04-21T10:12:24.044453188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:24.045442 containerd[1501]: time="2026-04-21T10:12:24.045159445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.026680668s" Apr 21 10:12:24.045442 containerd[1501]: time="2026-04-21T10:12:24.045187176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:12:24.049011 containerd[1501]: time="2026-04-21T10:12:24.048920170Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:12:24.062540 containerd[1501]: time="2026-04-21T10:12:24.062484123Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046\"" Apr 21 10:12:24.063026 containerd[1501]: time="2026-04-21T10:12:24.062864557Z" level=info msg="StartContainer for \"16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046\"" Apr 21 10:12:24.092724 systemd[1]: Started cri-containerd-16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046.scope - libcontainer container 16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046. Apr 21 10:12:24.116906 containerd[1501]: time="2026-04-21T10:12:24.116824368Z" level=info msg="StartContainer for \"16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046\" returns successfully" Apr 21 10:12:24.150743 systemd[1]: cri-containerd-16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046.scope: Deactivated successfully. 
Apr 21 10:12:24.225372 containerd[1501]: time="2026-04-21T10:12:24.225290366Z" level=info msg="shim disconnected" id=16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046 namespace=k8s.io Apr 21 10:12:24.225372 containerd[1501]: time="2026-04-21T10:12:24.225359820Z" level=warning msg="cleaning up after shim disconnected" id=16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046 namespace=k8s.io Apr 21 10:12:24.225372 containerd[1501]: time="2026-04-21T10:12:24.225367622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:24.236068 containerd[1501]: time="2026-04-21T10:12:24.236026740Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:12:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:12:24.930358 kubelet[2557]: E0421 10:12:24.929523 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:25.013452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16a540b51bf31369fb12531a1e239ea7b6ba861b35be4698ba35c135641b8046-rootfs.mount: Deactivated successfully. 
Apr 21 10:12:25.031803 containerd[1501]: time="2026-04-21T10:12:25.031756195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:12:25.257748 kubelet[2557]: I0421 10:12:25.256952 2557 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:12:26.927651 kubelet[2557]: E0421 10:12:26.926459 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:27.647499 containerd[1501]: time="2026-04-21T10:12:27.647439873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:27.648629 containerd[1501]: time="2026-04-21T10:12:27.648498700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:12:27.649690 containerd[1501]: time="2026-04-21T10:12:27.649501071Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:27.651651 containerd[1501]: time="2026-04-21T10:12:27.651607927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:27.652122 containerd[1501]: time="2026-04-21T10:12:27.652094090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.62028103s" Apr 21 10:12:27.652160 containerd[1501]: time="2026-04-21T10:12:27.652123835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:12:27.656520 containerd[1501]: time="2026-04-21T10:12:27.656489797Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:12:27.669964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189112783.mount: Deactivated successfully. Apr 21 10:12:27.676793 containerd[1501]: time="2026-04-21T10:12:27.676751567Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57\"" Apr 21 10:12:27.677249 containerd[1501]: time="2026-04-21T10:12:27.677223438Z" level=info msg="StartContainer for \"55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57\"" Apr 21 10:12:27.709774 systemd[1]: Started cri-containerd-55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57.scope - libcontainer container 55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57. Apr 21 10:12:27.735325 containerd[1501]: time="2026-04-21T10:12:27.735273984Z" level=info msg="StartContainer for \"55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57\" returns successfully" Apr 21 10:12:28.219592 systemd[1]: cri-containerd-55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57.scope: Deactivated successfully. 
Apr 21 10:12:28.255656 containerd[1501]: time="2026-04-21T10:12:28.255541912Z" level=info msg="shim disconnected" id=55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57 namespace=k8s.io Apr 21 10:12:28.255656 containerd[1501]: time="2026-04-21T10:12:28.255604186Z" level=warning msg="cleaning up after shim disconnected" id=55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57 namespace=k8s.io Apr 21 10:12:28.255656 containerd[1501]: time="2026-04-21T10:12:28.255629084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:28.318207 kubelet[2557]: I0421 10:12:28.318160 2557 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 21 10:12:28.362659 kubelet[2557]: I0421 10:12:28.360795 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc7m2\" (UniqueName: \"kubernetes.io/projected/f6975e89-7bfc-494e-b59a-cbe9de15cd69-kube-api-access-jc7m2\") pod \"calico-kube-controllers-646984bf7c-rlbmt\" (UID: \"f6975e89-7bfc-494e-b59a-cbe9de15cd69\") " pod="calico-system/calico-kube-controllers-646984bf7c-rlbmt" Apr 21 10:12:28.362659 kubelet[2557]: I0421 10:12:28.360828 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6975e89-7bfc-494e-b59a-cbe9de15cd69-tigera-ca-bundle\") pod \"calico-kube-controllers-646984bf7c-rlbmt\" (UID: \"f6975e89-7bfc-494e-b59a-cbe9de15cd69\") " pod="calico-system/calico-kube-controllers-646984bf7c-rlbmt" Apr 21 10:12:28.369042 systemd[1]: Created slice kubepods-besteffort-podf6975e89_7bfc_494e_b59a_cbe9de15cd69.slice - libcontainer container kubepods-besteffort-podf6975e89_7bfc_494e_b59a_cbe9de15cd69.slice. Apr 21 10:12:28.375716 systemd[1]: Created slice kubepods-burstable-pod9526e420_60fa_46fa_be79_ec21cb169333.slice - libcontainer container kubepods-burstable-pod9526e420_60fa_46fa_be79_ec21cb169333.slice. 
Apr 21 10:12:28.387592 systemd[1]: Created slice kubepods-besteffort-pod86c26390_629c_4b75_b93f_04ad32bf827a.slice - libcontainer container kubepods-besteffort-pod86c26390_629c_4b75_b93f_04ad32bf827a.slice. Apr 21 10:12:28.393837 systemd[1]: Created slice kubepods-besteffort-pod155836f4_49f6_4af8_bc89_7bc3e4603c89.slice - libcontainer container kubepods-besteffort-pod155836f4_49f6_4af8_bc89_7bc3e4603c89.slice. Apr 21 10:12:28.401674 systemd[1]: Created slice kubepods-besteffort-podf53cdfb7_2232_4081_b646_6f2c33fb2ce3.slice - libcontainer container kubepods-besteffort-podf53cdfb7_2232_4081_b646_6f2c33fb2ce3.slice. Apr 21 10:12:28.408580 systemd[1]: Created slice kubepods-burstable-pod1dab33d7_a2e6_4c4b_8b0e_6b44c82e72e2.slice - libcontainer container kubepods-burstable-pod1dab33d7_a2e6_4c4b_8b0e_6b44c82e72e2.slice. Apr 21 10:12:28.416662 systemd[1]: Created slice kubepods-besteffort-pod58cab4f0_914e_4b34_bf08_20af1325a859.slice - libcontainer container kubepods-besteffort-pod58cab4f0_914e_4b34_bf08_20af1325a859.slice. 
Apr 21 10:12:28.462014 kubelet[2557]: I0421 10:12:28.461968 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxvv\" (UniqueName: \"kubernetes.io/projected/9526e420-60fa-46fa-be79-ec21cb169333-kube-api-access-jwxvv\") pod \"coredns-7d764666f9-mbg9w\" (UID: \"9526e420-60fa-46fa-be79-ec21cb169333\") " pod="kube-system/coredns-7d764666f9-mbg9w" Apr 21 10:12:28.462157 kubelet[2557]: I0421 10:12:28.462032 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9526e420-60fa-46fa-be79-ec21cb169333-config-volume\") pod \"coredns-7d764666f9-mbg9w\" (UID: \"9526e420-60fa-46fa-be79-ec21cb169333\") " pod="kube-system/coredns-7d764666f9-mbg9w" Apr 21 10:12:28.563413 kubelet[2557]: I0421 10:12:28.563189 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86c26390-629c-4b75-b93f-04ad32bf827a-calico-apiserver-certs\") pod \"calico-apiserver-55967dc964-whqhx\" (UID: \"86c26390-629c-4b75-b93f-04ad32bf827a\") " pod="calico-system/calico-apiserver-55967dc964-whqhx" Apr 21 10:12:28.563413 kubelet[2557]: I0421 10:12:28.563253 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmpd\" (UniqueName: \"kubernetes.io/projected/86c26390-629c-4b75-b93f-04ad32bf827a-kube-api-access-qbmpd\") pod \"calico-apiserver-55967dc964-whqhx\" (UID: \"86c26390-629c-4b75-b93f-04ad32bf827a\") " pod="calico-system/calico-apiserver-55967dc964-whqhx" Apr 21 10:12:28.563413 kubelet[2557]: I0421 10:12:28.563279 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p55j\" (UniqueName: \"kubernetes.io/projected/f53cdfb7-2232-4081-b646-6f2c33fb2ce3-kube-api-access-6p55j\") pod \"goldmane-9f7667bb8-2q75p\" 
(UID: \"f53cdfb7-2232-4081-b646-6f2c33fb2ce3\") " pod="calico-system/goldmane-9f7667bb8-2q75p" Apr 21 10:12:28.563413 kubelet[2557]: I0421 10:12:28.563304 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58cab4f0-914e-4b34-bf08-20af1325a859-calico-apiserver-certs\") pod \"calico-apiserver-55967dc964-cmfmc\" (UID: \"58cab4f0-914e-4b34-bf08-20af1325a859\") " pod="calico-system/calico-apiserver-55967dc964-cmfmc" Apr 21 10:12:28.563413 kubelet[2557]: I0421 10:12:28.563362 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-backend-key-pair\") pod \"whisker-5dbf5b654b-jzpn9\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " pod="calico-system/whisker-5dbf5b654b-jzpn9" Apr 21 10:12:28.563822 kubelet[2557]: I0421 10:12:28.563397 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2-config-volume\") pod \"coredns-7d764666f9-cxwzz\" (UID: \"1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2\") " pod="kube-system/coredns-7d764666f9-cxwzz" Apr 21 10:12:28.563822 kubelet[2557]: I0421 10:12:28.563419 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f53cdfb7-2232-4081-b646-6f2c33fb2ce3-goldmane-key-pair\") pod \"goldmane-9f7667bb8-2q75p\" (UID: \"f53cdfb7-2232-4081-b646-6f2c33fb2ce3\") " pod="calico-system/goldmane-9f7667bb8-2q75p" Apr 21 10:12:28.563822 kubelet[2557]: I0421 10:12:28.563448 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-ca-bundle\") pod \"whisker-5dbf5b654b-jzpn9\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " pod="calico-system/whisker-5dbf5b654b-jzpn9" Apr 21 10:12:28.563822 kubelet[2557]: I0421 10:12:28.563471 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxkk\" (UniqueName: \"kubernetes.io/projected/1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2-kube-api-access-ckxkk\") pod \"coredns-7d764666f9-cxwzz\" (UID: \"1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2\") " pod="kube-system/coredns-7d764666f9-cxwzz" Apr 21 10:12:28.563822 kubelet[2557]: I0421 10:12:28.563494 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f53cdfb7-2232-4081-b646-6f2c33fb2ce3-config\") pod \"goldmane-9f7667bb8-2q75p\" (UID: \"f53cdfb7-2232-4081-b646-6f2c33fb2ce3\") " pod="calico-system/goldmane-9f7667bb8-2q75p" Apr 21 10:12:28.564053 kubelet[2557]: I0421 10:12:28.563540 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-nginx-config\") pod \"whisker-5dbf5b654b-jzpn9\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " pod="calico-system/whisker-5dbf5b654b-jzpn9" Apr 21 10:12:28.564053 kubelet[2557]: I0421 10:12:28.563590 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-422ft\" (UniqueName: \"kubernetes.io/projected/58cab4f0-914e-4b34-bf08-20af1325a859-kube-api-access-422ft\") pod \"calico-apiserver-55967dc964-cmfmc\" (UID: \"58cab4f0-914e-4b34-bf08-20af1325a859\") " pod="calico-system/calico-apiserver-55967dc964-cmfmc" Apr 21 10:12:28.564053 kubelet[2557]: I0421 10:12:28.563672 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9gd5f\" (UniqueName: \"kubernetes.io/projected/155836f4-49f6-4af8-bc89-7bc3e4603c89-kube-api-access-9gd5f\") pod \"whisker-5dbf5b654b-jzpn9\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " pod="calico-system/whisker-5dbf5b654b-jzpn9" Apr 21 10:12:28.564053 kubelet[2557]: I0421 10:12:28.563696 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f53cdfb7-2232-4081-b646-6f2c33fb2ce3-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-2q75p\" (UID: \"f53cdfb7-2232-4081-b646-6f2c33fb2ce3\") " pod="calico-system/goldmane-9f7667bb8-2q75p" Apr 21 10:12:28.674670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55ebd5214a159e08c6ea2d29bb7268899cdf60f7ba3722793fda8688809e0a57-rootfs.mount: Deactivated successfully. Apr 21 10:12:28.713263 containerd[1501]: time="2026-04-21T10:12:28.712835880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646984bf7c-rlbmt,Uid:f6975e89-7bfc-494e-b59a-cbe9de15cd69,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:28.728599 containerd[1501]: time="2026-04-21T10:12:28.728491738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mbg9w,Uid:9526e420-60fa-46fa-be79-ec21cb169333,Namespace:kube-system,Attempt:0,}" Apr 21 10:12:28.826989 containerd[1501]: time="2026-04-21T10:12:28.826565362Z" level=error msg="Failed to destroy network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.827167 containerd[1501]: time="2026-04-21T10:12:28.827130854Z" level=error msg="encountered an error cleaning up failed sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.827760 containerd[1501]: time="2026-04-21T10:12:28.827191485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mbg9w,Uid:9526e420-60fa-46fa-be79-ec21cb169333,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.827798 kubelet[2557]: E0421 10:12:28.827482 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.827798 kubelet[2557]: E0421 10:12:28.827556 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-mbg9w" Apr 21 10:12:28.827798 kubelet[2557]: E0421 10:12:28.827571 2557 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-mbg9w" Apr 21 10:12:28.827874 kubelet[2557]: E0421 10:12:28.827663 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-mbg9w_kube-system(9526e420-60fa-46fa-be79-ec21cb169333)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-mbg9w_kube-system(9526e420-60fa-46fa-be79-ec21cb169333)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-mbg9w" podUID="9526e420-60fa-46fa-be79-ec21cb169333" Apr 21 10:12:28.829632 containerd[1501]: time="2026-04-21T10:12:28.829585645Z" level=error msg="Failed to destroy network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.829979 containerd[1501]: time="2026-04-21T10:12:28.829960149Z" level=error msg="encountered an error cleaning up failed sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.830068 containerd[1501]: time="2026-04-21T10:12:28.830052919Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-646984bf7c-rlbmt,Uid:f6975e89-7bfc-494e-b59a-cbe9de15cd69,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.830284 kubelet[2557]: E0421 10:12:28.830258 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:28.830338 kubelet[2557]: E0421 10:12:28.830324 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-646984bf7c-rlbmt" Apr 21 10:12:28.830338 kubelet[2557]: E0421 10:12:28.830335 2557 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-646984bf7c-rlbmt" Apr 21 10:12:28.830409 kubelet[2557]: E0421 10:12:28.830366 2557 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-646984bf7c-rlbmt_calico-system(f6975e89-7bfc-494e-b59a-cbe9de15cd69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-646984bf7c-rlbmt_calico-system(f6975e89-7bfc-494e-b59a-cbe9de15cd69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-646984bf7c-rlbmt" podUID="f6975e89-7bfc-494e-b59a-cbe9de15cd69" Apr 21 10:12:28.937580 systemd[1]: Created slice kubepods-besteffort-pod27b58c18_a535_4675_97b1_656bf0345381.slice - libcontainer container kubepods-besteffort-pod27b58c18_a535_4675_97b1_656bf0345381.slice. Apr 21 10:12:28.943376 containerd[1501]: time="2026-04-21T10:12:28.943274685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmvbc,Uid:27b58c18-a535-4675-97b1-656bf0345381,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:28.993217 containerd[1501]: time="2026-04-21T10:12:28.993153647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-whqhx,Uid:86c26390-629c-4b75-b93f-04ad32bf827a,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:28.999562 containerd[1501]: time="2026-04-21T10:12:28.999460022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbf5b654b-jzpn9,Uid:155836f4-49f6-4af8-bc89-7bc3e4603c89,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:29.010816 containerd[1501]: time="2026-04-21T10:12:29.010780911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-2q75p,Uid:f53cdfb7-2232-4081-b646-6f2c33fb2ce3,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:29.019399 containerd[1501]: 
time="2026-04-21T10:12:29.019353199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxwzz,Uid:1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2,Namespace:kube-system,Attempt:0,}" Apr 21 10:12:29.021590 containerd[1501]: time="2026-04-21T10:12:29.021565322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-cmfmc,Uid:58cab4f0-914e-4b34-bf08-20af1325a859,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:29.033803 containerd[1501]: time="2026-04-21T10:12:29.033755251Z" level=error msg="Failed to destroy network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.035071 containerd[1501]: time="2026-04-21T10:12:29.035002500Z" level=error msg="encountered an error cleaning up failed sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.035169 containerd[1501]: time="2026-04-21T10:12:29.035090322Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmvbc,Uid:27b58c18-a535-4675-97b1-656bf0345381,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.035351 kubelet[2557]: E0421 10:12:29.035290 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.035401 kubelet[2557]: E0421 10:12:29.035368 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kmvbc" Apr 21 10:12:29.035401 kubelet[2557]: E0421 10:12:29.035390 2557 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kmvbc" Apr 21 10:12:29.035468 kubelet[2557]: E0421 10:12:29.035439 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kmvbc_calico-system(27b58c18-a535-4675-97b1-656bf0345381)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kmvbc_calico-system(27b58c18-a535-4675-97b1-656bf0345381)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kmvbc" 
podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:29.046687 kubelet[2557]: I0421 10:12:29.046646 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:29.049919 containerd[1501]: time="2026-04-21T10:12:29.048480899Z" level=info msg="StopPodSandbox for \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\"" Apr 21 10:12:29.050530 containerd[1501]: time="2026-04-21T10:12:29.050510167Z" level=info msg="Ensure that sandbox 7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288 in task-service has been cleanup successfully" Apr 21 10:12:29.085704 kubelet[2557]: I0421 10:12:29.083902 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:29.085806 containerd[1501]: time="2026-04-21T10:12:29.085154571Z" level=info msg="StopPodSandbox for \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\"" Apr 21 10:12:29.085806 containerd[1501]: time="2026-04-21T10:12:29.085332410Z" level=info msg="Ensure that sandbox 959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e in task-service has been cleanup successfully" Apr 21 10:12:29.095297 containerd[1501]: time="2026-04-21T10:12:29.095261531Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:12:29.096478 kubelet[2557]: I0421 10:12:29.095883 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:29.099164 containerd[1501]: time="2026-04-21T10:12:29.099047155Z" level=info msg="StopPodSandbox for \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\"" Apr 21 10:12:29.099586 containerd[1501]: 
time="2026-04-21T10:12:29.099471926Z" level=info msg="Ensure that sandbox 7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6 in task-service has been cleanup successfully" Apr 21 10:12:29.165279 containerd[1501]: time="2026-04-21T10:12:29.165239146Z" level=info msg="CreateContainer within sandbox \"ee92a099c09ba27f04cf77bff0cfda894e51fb5226cdeb8b931a49f522b77af7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7306fb15ffb30c29e4dcc6de634b41694f2dedbc67d0839fd386bc0605f40f25\"" Apr 21 10:12:29.167573 containerd[1501]: time="2026-04-21T10:12:29.167549237Z" level=info msg="StartContainer for \"7306fb15ffb30c29e4dcc6de634b41694f2dedbc67d0839fd386bc0605f40f25\"" Apr 21 10:12:29.187525 containerd[1501]: time="2026-04-21T10:12:29.187468162Z" level=error msg="StopPodSandbox for \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\" failed" error="failed to destroy network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.187936 kubelet[2557]: E0421 10:12:29.187766 2557 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:29.187936 kubelet[2557]: E0421 10:12:29.187822 2557 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6"} Apr 21 10:12:29.187936 kubelet[2557]: E0421 
10:12:29.187881 2557 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9526e420-60fa-46fa-be79-ec21cb169333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:12:29.187936 kubelet[2557]: E0421 10:12:29.187903 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9526e420-60fa-46fa-be79-ec21cb169333\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-mbg9w" podUID="9526e420-60fa-46fa-be79-ec21cb169333" Apr 21 10:12:29.203959 containerd[1501]: time="2026-04-21T10:12:29.203824329Z" level=error msg="Failed to destroy network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.205632 containerd[1501]: time="2026-04-21T10:12:29.204166364Z" level=error msg="encountered an error cleaning up failed sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 
21 10:12:29.205632 containerd[1501]: time="2026-04-21T10:12:29.204210660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-whqhx,Uid:86c26390-629c-4b75-b93f-04ad32bf827a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.205743 kubelet[2557]: E0421 10:12:29.204544 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.205743 kubelet[2557]: E0421 10:12:29.204592 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55967dc964-whqhx" Apr 21 10:12:29.205743 kubelet[2557]: E0421 10:12:29.204607 2557 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55967dc964-whqhx" 
Apr 21 10:12:29.205820 kubelet[2557]: E0421 10:12:29.205736 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55967dc964-whqhx_calico-system(86c26390-629c-4b75-b93f-04ad32bf827a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55967dc964-whqhx_calico-system(86c26390-629c-4b75-b93f-04ad32bf827a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55967dc964-whqhx" podUID="86c26390-629c-4b75-b93f-04ad32bf827a" Apr 21 10:12:29.226938 containerd[1501]: time="2026-04-21T10:12:29.226888883Z" level=error msg="StopPodSandbox for \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\" failed" error="failed to destroy network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.227326 kubelet[2557]: E0421 10:12:29.227126 2557 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:29.227326 kubelet[2557]: E0421 10:12:29.227177 2557 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288"} Apr 21 10:12:29.227326 kubelet[2557]: E0421 10:12:29.227204 2557 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6975e89-7bfc-494e-b59a-cbe9de15cd69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:12:29.227326 kubelet[2557]: E0421 10:12:29.227234 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6975e89-7bfc-494e-b59a-cbe9de15cd69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-646984bf7c-rlbmt" podUID="f6975e89-7bfc-494e-b59a-cbe9de15cd69" Apr 21 10:12:29.245368 containerd[1501]: time="2026-04-21T10:12:29.244994035Z" level=error msg="Failed to destroy network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.245127 systemd[1]: Started cri-containerd-7306fb15ffb30c29e4dcc6de634b41694f2dedbc67d0839fd386bc0605f40f25.scope - libcontainer container 7306fb15ffb30c29e4dcc6de634b41694f2dedbc67d0839fd386bc0605f40f25. 
Apr 21 10:12:29.248269 containerd[1501]: time="2026-04-21T10:12:29.246188285Z" level=error msg="encountered an error cleaning up failed sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.248386 containerd[1501]: time="2026-04-21T10:12:29.248369672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbf5b654b-jzpn9,Uid:155836f4-49f6-4af8-bc89-7bc3e4603c89,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.248656 kubelet[2557]: E0421 10:12:29.248568 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.248764 kubelet[2557]: E0421 10:12:29.248716 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dbf5b654b-jzpn9" Apr 21 10:12:29.248764 kubelet[2557]: E0421 10:12:29.248735 2557 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5dbf5b654b-jzpn9" Apr 21 10:12:29.248899 kubelet[2557]: E0421 10:12:29.248845 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5dbf5b654b-jzpn9_calico-system(155836f4-49f6-4af8-bc89-7bc3e4603c89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5dbf5b654b-jzpn9_calico-system(155836f4-49f6-4af8-bc89-7bc3e4603c89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5dbf5b654b-jzpn9" podUID="155836f4-49f6-4af8-bc89-7bc3e4603c89" Apr 21 10:12:29.256900 containerd[1501]: time="2026-04-21T10:12:29.256744061Z" level=error msg="StopPodSandbox for \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\" failed" error="failed to destroy network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.256962 kubelet[2557]: E0421 10:12:29.256913 2557 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:29.256962 kubelet[2557]: E0421 10:12:29.256950 2557 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e"} Apr 21 10:12:29.257016 kubelet[2557]: E0421 10:12:29.256975 2557 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27b58c18-a535-4675-97b1-656bf0345381\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:12:29.257016 kubelet[2557]: E0421 10:12:29.257001 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27b58c18-a535-4675-97b1-656bf0345381\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kmvbc" podUID="27b58c18-a535-4675-97b1-656bf0345381" Apr 21 10:12:29.267493 containerd[1501]: time="2026-04-21T10:12:29.266995268Z" level=error msg="Failed to destroy network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.267584 containerd[1501]: time="2026-04-21T10:12:29.267497574Z" level=error msg="encountered an error cleaning up failed sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.267584 containerd[1501]: time="2026-04-21T10:12:29.267544255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-2q75p,Uid:f53cdfb7-2232-4081-b646-6f2c33fb2ce3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.267788 kubelet[2557]: E0421 10:12:29.267739 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.267788 kubelet[2557]: E0421 10:12:29.267780 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-2q75p" 
Apr 21 10:12:29.267843 kubelet[2557]: E0421 10:12:29.267811 2557 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-2q75p" Apr 21 10:12:29.267960 kubelet[2557]: E0421 10:12:29.267858 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-2q75p_calico-system(f53cdfb7-2232-4081-b646-6f2c33fb2ce3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-2q75p_calico-system(f53cdfb7-2232-4081-b646-6f2c33fb2ce3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-2q75p" podUID="f53cdfb7-2232-4081-b646-6f2c33fb2ce3" Apr 21 10:12:29.271407 containerd[1501]: time="2026-04-21T10:12:29.271204520Z" level=error msg="Failed to destroy network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.271591 containerd[1501]: time="2026-04-21T10:12:29.271559375Z" level=error msg="encountered an error cleaning up failed sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.271769 containerd[1501]: time="2026-04-21T10:12:29.271607177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxwzz,Uid:1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.271799 kubelet[2557]: E0421 10:12:29.271762 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.271831 kubelet[2557]: E0421 10:12:29.271796 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cxwzz" Apr 21 10:12:29.271831 kubelet[2557]: E0421 10:12:29.271811 2557 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cxwzz" Apr 21 10:12:29.271971 kubelet[2557]: E0421 10:12:29.271846 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-cxwzz_kube-system(1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-cxwzz_kube-system(1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-cxwzz" podUID="1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2" Apr 21 10:12:29.279727 containerd[1501]: time="2026-04-21T10:12:29.279673271Z" level=error msg="Failed to destroy network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.280023 containerd[1501]: time="2026-04-21T10:12:29.279989938Z" level=error msg="encountered an error cleaning up failed sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.280060 containerd[1501]: time="2026-04-21T10:12:29.280042879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-cmfmc,Uid:58cab4f0-914e-4b34-bf08-20af1325a859,Namespace:calico-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.280272 kubelet[2557]: E0421 10:12:29.280232 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:12:29.280333 kubelet[2557]: E0421 10:12:29.280278 2557 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55967dc964-cmfmc" Apr 21 10:12:29.280333 kubelet[2557]: E0421 10:12:29.280295 2557 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-55967dc964-cmfmc" Apr 21 10:12:29.280398 kubelet[2557]: E0421 10:12:29.280353 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55967dc964-cmfmc_calico-system(58cab4f0-914e-4b34-bf08-20af1325a859)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55967dc964-cmfmc_calico-system(58cab4f0-914e-4b34-bf08-20af1325a859)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-55967dc964-cmfmc" podUID="58cab4f0-914e-4b34-bf08-20af1325a859" Apr 21 10:12:29.297969 containerd[1501]: time="2026-04-21T10:12:29.297938775Z" level=info msg="StartContainer for \"7306fb15ffb30c29e4dcc6de634b41694f2dedbc67d0839fd386bc0605f40f25\" returns successfully" Apr 21 10:12:30.118765 kubelet[2557]: I0421 10:12:30.117367 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Apr 21 10:12:30.124803 containerd[1501]: time="2026-04-21T10:12:30.124007744Z" level=info msg="StopPodSandbox for \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\"" Apr 21 10:12:30.124803 containerd[1501]: time="2026-04-21T10:12:30.124241978Z" level=info msg="Ensure that sandbox 0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41 in task-service has been cleanup successfully" Apr 21 10:12:30.131672 kubelet[2557]: I0421 10:12:30.130295 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:30.132139 containerd[1501]: time="2026-04-21T10:12:30.132108137Z" level=info msg="StopPodSandbox for \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\"" Apr 21 10:12:30.132544 containerd[1501]: time="2026-04-21T10:12:30.132415841Z" level=info msg="Ensure that sandbox 47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665 in 
task-service has been cleanup successfully" Apr 21 10:12:30.139921 kubelet[2557]: I0421 10:12:30.139895 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:30.141163 containerd[1501]: time="2026-04-21T10:12:30.141134165Z" level=info msg="StopPodSandbox for \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\"" Apr 21 10:12:30.141674 containerd[1501]: time="2026-04-21T10:12:30.141659997Z" level=info msg="Ensure that sandbox cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14 in task-service has been cleanup successfully" Apr 21 10:12:30.145128 kubelet[2557]: I0421 10:12:30.144938 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:30.145879 containerd[1501]: time="2026-04-21T10:12:30.145865031Z" level=info msg="StopPodSandbox for \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\"" Apr 21 10:12:30.146746 kubelet[2557]: I0421 10:12:30.146335 2557 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:30.147031 containerd[1501]: time="2026-04-21T10:12:30.147017047Z" level=info msg="Ensure that sandbox ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e in task-service has been cleanup successfully" Apr 21 10:12:30.147873 containerd[1501]: time="2026-04-21T10:12:30.146716303Z" level=info msg="StopPodSandbox for \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\"" Apr 21 10:12:30.148026 containerd[1501]: time="2026-04-21T10:12:30.148006157Z" level=info msg="Ensure that sandbox e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e in task-service has been cleanup successfully" Apr 21 10:12:30.151834 kubelet[2557]: I0421 10:12:30.151784 2557 
pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-k87nd" podStartSLOduration=2.402770835 podStartE2EDuration="16.151768224s" podCreationTimestamp="2026-04-21 10:12:14 +0000 UTC" firstStartedPulling="2026-04-21 10:12:15.309096716 +0000 UTC m=+18.468775986" lastFinishedPulling="2026-04-21 10:12:29.058094115 +0000 UTC m=+32.217773375" observedRunningTime="2026-04-21 10:12:30.144555929 +0000 UTC m=+33.304235199" watchObservedRunningTime="2026-04-21 10:12:30.151768224 +0000 UTC m=+33.311447514" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.264 [INFO][3851] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.264 [INFO][3851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" iface="eth0" netns="/var/run/netns/cni-1a7aa27e-7f45-8241-fc4a-ed7d407c3b4e" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.264 [INFO][3851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" iface="eth0" netns="/var/run/netns/cni-1a7aa27e-7f45-8241-fc4a-ed7d407c3b4e" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.264 [INFO][3851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" iface="eth0" netns="/var/run/netns/cni-1a7aa27e-7f45-8241-fc4a-ed7d407c3b4e" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.264 [INFO][3851] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.264 [INFO][3851] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.311 [INFO][3892] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.312 [INFO][3892] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.312 [INFO][3892] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.328 [WARNING][3892] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.328 [INFO][3892] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.336 [INFO][3892] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.345109 containerd[1501]: 2026-04-21 10:12:30.341 [INFO][3851] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:30.347445 systemd[1]: run-netns-cni\x2d1a7aa27e\x2d7f45\x2d8241\x2dfc4a\x2ded7d407c3b4e.mount: Deactivated successfully. Apr 21 10:12:30.347929 containerd[1501]: time="2026-04-21T10:12:30.347567315Z" level=info msg="TearDown network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\" successfully" Apr 21 10:12:30.347929 containerd[1501]: time="2026-04-21T10:12:30.347588917Z" level=info msg="StopPodSandbox for \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\" returns successfully" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.258 [INFO][3840] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.258 [INFO][3840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" iface="eth0" netns="/var/run/netns/cni-9a1eb9c5-e1a1-6664-80a1-1ef26e89805d" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.259 [INFO][3840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" iface="eth0" netns="/var/run/netns/cni-9a1eb9c5-e1a1-6664-80a1-1ef26e89805d" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.259 [INFO][3840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" iface="eth0" netns="/var/run/netns/cni-9a1eb9c5-e1a1-6664-80a1-1ef26e89805d" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.259 [INFO][3840] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.259 [INFO][3840] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.312 [INFO][3887] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.313 [INFO][3887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.335 [INFO][3887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.341 [WARNING][3887] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.341 [INFO][3887] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.342 [INFO][3887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.348872 containerd[1501]: 2026-04-21 10:12:30.346 [INFO][3840] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:30.348872 containerd[1501]: time="2026-04-21T10:12:30.348803908Z" level=info msg="TearDown network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\" successfully" Apr 21 10:12:30.348872 containerd[1501]: time="2026-04-21T10:12:30.348830117Z" level=info msg="StopPodSandbox for \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\" returns successfully" Apr 21 10:12:30.354644 containerd[1501]: time="2026-04-21T10:12:30.353733554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-whqhx,Uid:86c26390-629c-4b75-b93f-04ad32bf827a,Namespace:calico-system,Attempt:1,}" Apr 21 10:12:30.354701 systemd[1]: run-netns-cni\x2d9a1eb9c5\x2de1a1\x2d6664\x2d80a1\x2d1ef26e89805d.mount: Deactivated successfully. 
Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.265 [INFO][3838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.265 [INFO][3838] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" iface="eth0" netns="/var/run/netns/cni-229ef5a2-cc8e-bbe4-29e6-72c2c3249a0a" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.265 [INFO][3838] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" iface="eth0" netns="/var/run/netns/cni-229ef5a2-cc8e-bbe4-29e6-72c2c3249a0a" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.270 [INFO][3838] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" iface="eth0" netns="/var/run/netns/cni-229ef5a2-cc8e-bbe4-29e6-72c2c3249a0a" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.270 [INFO][3838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.270 [INFO][3838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.329 [INFO][3897] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.330 
[INFO][3897] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.343 [INFO][3897] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.355 [WARNING][3897] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.355 [INFO][3897] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.356 [INFO][3897] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.368935 containerd[1501]: 2026-04-21 10:12:30.363 [INFO][3838] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:30.369843 containerd[1501]: time="2026-04-21T10:12:30.369773587Z" level=info msg="TearDown network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\" successfully" Apr 21 10:12:30.369843 containerd[1501]: time="2026-04-21T10:12:30.369794930Z" level=info msg="StopPodSandbox for \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\" returns successfully" Apr 21 10:12:30.374718 systemd[1]: run-netns-cni\x2d229ef5a2\x2dcc8e\x2dbbe4\x2d29e6\x2d72c2c3249a0a.mount: Deactivated successfully. 
Apr 21 10:12:30.377772 containerd[1501]: time="2026-04-21T10:12:30.377717134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-2q75p,Uid:f53cdfb7-2232-4081-b646-6f2c33fb2ce3,Namespace:calico-system,Attempt:1,}" Apr 21 10:12:30.378328 kubelet[2557]: I0421 10:12:30.378222 2557 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-nginx-config\" (UniqueName: \"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-nginx-config\") pod \"155836f4-49f6-4af8-bc89-7bc3e4603c89\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " Apr 21 10:12:30.378328 kubelet[2557]: I0421 10:12:30.378263 2557 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/155836f4-49f6-4af8-bc89-7bc3e4603c89-kube-api-access-9gd5f\" (UniqueName: \"kubernetes.io/projected/155836f4-49f6-4af8-bc89-7bc3e4603c89-kube-api-access-9gd5f\") pod \"155836f4-49f6-4af8-bc89-7bc3e4603c89\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " Apr 21 10:12:30.378328 kubelet[2557]: I0421 10:12:30.378292 2557 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-backend-key-pair\") pod \"155836f4-49f6-4af8-bc89-7bc3e4603c89\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " Apr 21 10:12:30.378854 kubelet[2557]: I0421 10:12:30.378483 2557 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-ca-bundle\") pod \"155836f4-49f6-4af8-bc89-7bc3e4603c89\" (UID: \"155836f4-49f6-4af8-bc89-7bc3e4603c89\") " Apr 21 10:12:30.378854 kubelet[2557]: I0421 10:12:30.378523 2557 
operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-nginx-config" pod "155836f4-49f6-4af8-bc89-7bc3e4603c89" (UID: "155836f4-49f6-4af8-bc89-7bc3e4603c89"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.253 [INFO][3837] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.253 [INFO][3837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" iface="eth0" netns="/var/run/netns/cni-4affd096-cb83-298f-15b9-46b0b37c000f" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.254 [INFO][3837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" iface="eth0" netns="/var/run/netns/cni-4affd096-cb83-298f-15b9-46b0b37c000f" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.255 [INFO][3837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" iface="eth0" netns="/var/run/netns/cni-4affd096-cb83-298f-15b9-46b0b37c000f" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.255 [INFO][3837] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.255 [INFO][3837] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.333 [INFO][3882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.333 [INFO][3882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.357 [INFO][3882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.367 [WARNING][3882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.368 [INFO][3882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.369 [INFO][3882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.382357 containerd[1501]: 2026-04-21 10:12:30.375 [INFO][3837] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Apr 21 10:12:30.382357 containerd[1501]: time="2026-04-21T10:12:30.382179416Z" level=info msg="TearDown network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\" successfully" Apr 21 10:12:30.382357 containerd[1501]: time="2026-04-21T10:12:30.382193046Z" level=info msg="StopPodSandbox for \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\" returns successfully" Apr 21 10:12:30.384579 kubelet[2557]: I0421 10:12:30.384423 2557 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-ca-bundle" pod "155836f4-49f6-4af8-bc89-7bc3e4603c89" (UID: "155836f4-49f6-4af8-bc89-7bc3e4603c89"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:12:30.386969 kubelet[2557]: I0421 10:12:30.386754 2557 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-backend-key-pair" pod "155836f4-49f6-4af8-bc89-7bc3e4603c89" (UID: "155836f4-49f6-4af8-bc89-7bc3e4603c89"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:12:30.387486 kubelet[2557]: I0421 10:12:30.387459 2557 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/155836f4-49f6-4af8-bc89-7bc3e4603c89-kube-api-access-9gd5f" pod "155836f4-49f6-4af8-bc89-7bc3e4603c89" (UID: "155836f4-49f6-4af8-bc89-7bc3e4603c89"). InnerVolumeSpecName "kube-api-access-9gd5f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:12:30.388518 containerd[1501]: time="2026-04-21T10:12:30.387804690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxwzz,Uid:1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2,Namespace:kube-system,Attempt:1,}" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.280 [INFO][3839] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.281 [INFO][3839] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" iface="eth0" netns="/var/run/netns/cni-559a446d-a6b8-86a6-6b4d-3c44c3c5f3d0" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.281 [INFO][3839] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" iface="eth0" netns="/var/run/netns/cni-559a446d-a6b8-86a6-6b4d-3c44c3c5f3d0" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.281 [INFO][3839] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" iface="eth0" netns="/var/run/netns/cni-559a446d-a6b8-86a6-6b4d-3c44c3c5f3d0" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.281 [INFO][3839] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.281 [INFO][3839] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.345 [INFO][3902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.345 [INFO][3902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.371 [INFO][3902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.385 [WARNING][3902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.385 [INFO][3902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.387 [INFO][3902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.395641 containerd[1501]: 2026-04-21 10:12:30.391 [INFO][3839] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:30.396020 containerd[1501]: time="2026-04-21T10:12:30.396001619Z" level=info msg="TearDown network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\" successfully" Apr 21 10:12:30.396081 containerd[1501]: time="2026-04-21T10:12:30.396072325Z" level=info msg="StopPodSandbox for \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\" returns successfully" Apr 21 10:12:30.398900 containerd[1501]: time="2026-04-21T10:12:30.398882689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-cmfmc,Uid:58cab4f0-914e-4b34-bf08-20af1325a859,Namespace:calico-system,Attempt:1,}" Apr 21 10:12:30.478844 kubelet[2557]: I0421 10:12:30.478770 2557 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-nginx-config\") on node \"ci-4081-3-7-7-16e5f88171\" DevicePath \"\"" Apr 21 10:12:30.478844 kubelet[2557]: I0421 
10:12:30.478795 2557 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9gd5f\" (UniqueName: \"kubernetes.io/projected/155836f4-49f6-4af8-bc89-7bc3e4603c89-kube-api-access-9gd5f\") on node \"ci-4081-3-7-7-16e5f88171\" DevicePath \"\"" Apr 21 10:12:30.478844 kubelet[2557]: I0421 10:12:30.478804 2557 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-backend-key-pair\") on node \"ci-4081-3-7-7-16e5f88171\" DevicePath \"\"" Apr 21 10:12:30.478844 kubelet[2557]: I0421 10:12:30.478811 2557 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/155836f4-49f6-4af8-bc89-7bc3e4603c89-whisker-ca-bundle\") on node \"ci-4081-3-7-7-16e5f88171\" DevicePath \"\"" Apr 21 10:12:30.536833 systemd-networkd[1404]: calie757c9da253: Link UP Apr 21 10:12:30.537386 systemd-networkd[1404]: calie757c9da253: Gained carrier Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.416 [ERROR][3921] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.430 [INFO][3921] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0 calico-apiserver-55967dc964- calico-system 86c26390-629c-4b75-b93f-04ad32bf827a 870 0 2026-04-21 10:12:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55967dc964 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 calico-apiserver-55967dc964-whqhx eth0 calico-apiserver [] [] 
[kns.calico-system ksa.calico-system.calico-apiserver] calie757c9da253 [] [] }} ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.430 [INFO][3921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.478 [INFO][3961] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" HandleID="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.487 [INFO][3961] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" HandleID="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000404fa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"calico-apiserver-55967dc964-whqhx", "timestamp":"2026-04-21 10:12:30.478414138 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc000370840)} Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.487 [INFO][3961] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.487 [INFO][3961] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.487 [INFO][3961] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.490 [INFO][3961] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.494 [INFO][3961] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.500 [INFO][3961] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.507 [INFO][3961] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.509 [INFO][3961] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.509 [INFO][3961] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.512 [INFO][3961] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68 Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.517 
[INFO][3961] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.522 [INFO][3961] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.65/26] block=192.168.65.64/26 handle="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.522 [INFO][3961] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.65/26] handle="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.522 [INFO][3961] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.554206 containerd[1501]: 2026-04-21 10:12:30.522 [INFO][3961] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.65/26] IPv6=[] ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" HandleID="k8s-pod-network.53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.555261 containerd[1501]: 2026-04-21 10:12:30.525 [INFO][3921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", 
UID:"86c26390-629c-4b75-b93f-04ad32bf827a", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"calico-apiserver-55967dc964-whqhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie757c9da253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.555261 containerd[1501]: 2026-04-21 10:12:30.525 [INFO][3921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.65/32] ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.555261 containerd[1501]: 2026-04-21 10:12:30.525 [INFO][3921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie757c9da253 ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 
10:12:30.555261 containerd[1501]: 2026-04-21 10:12:30.540 [INFO][3921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.555261 containerd[1501]: 2026-04-21 10:12:30.540 [INFO][3921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", UID:"86c26390-629c-4b75-b93f-04ad32bf827a", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68", Pod:"calico-apiserver-55967dc964-whqhx", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie757c9da253", MAC:"12:c4:b1:09:46:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.555261 containerd[1501]: 2026-04-21 10:12:30.551 [INFO][3921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68" Namespace="calico-system" Pod="calico-apiserver-55967dc964-whqhx" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:30.588578 containerd[1501]: time="2026-04-21T10:12:30.588315890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:30.588578 containerd[1501]: time="2026-04-21T10:12:30.588374878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:30.588578 containerd[1501]: time="2026-04-21T10:12:30.588385414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:30.588578 containerd[1501]: time="2026-04-21T10:12:30.588455230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:30.617292 systemd[1]: Started cri-containerd-53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68.scope - libcontainer container 53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68. 
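The IPAM sequence logged above (acquire the host-wide lock, try the host's affine block 192.168.65.64/26, load it, claim the next free address, release the lock) can be modeled with a short sketch. This is a hypothetical simplified model for illustration, not Calico's actual Go implementation; the class and method names are invented, but the block CIDR and the resulting addresses come from the log.

```python
import ipaddress
import threading

class AffineBlock:
    """Toy model of a Calico IPAM affine block (names are illustrative)."""

    def __init__(self, cidr: str):
        self.cidr = ipaddress.ip_network(cidr)
        self.allocated = set()          # addresses already claimed in this block
        self.lock = threading.Lock()    # stands in for the host-wide IPAM lock

    def auto_assign(self, handle: str) -> str:
        """Claim the next free host address in the block for `handle`."""
        with self.lock:                 # "Acquired host-wide IPAM lock."
            for addr in self.cidr.hosts():
                if addr not in self.allocated:
                    self.allocated.add(addr)  # "Writing block in order to claim IPs"
                    return f"{addr}/{self.cidr.prefixlen}"
            raise RuntimeError(f"block {self.cidr} exhausted for {handle}")

block = AffineBlock("192.168.65.64/26")
print(block.auto_assign("calico-apiserver-55967dc964-whqhx"))  # 192.168.65.65/26
print(block.auto_assign("goldmane-9f7667bb8-2q75p"))           # 192.168.65.66/26
print(block.auto_assign("calico-apiserver-55967dc964-cmfmc"))  # 192.168.65.67/26
```

Sequential allocation under one lock explains why the three pods in this log receive .65, .66, and .67 in turn from the same /26 block.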
Apr 21 10:12:30.658190 systemd-networkd[1404]: cali12abec3bd27: Link UP Apr 21 10:12:30.659277 systemd-networkd[1404]: cali12abec3bd27: Gained carrier Apr 21 10:12:30.681140 systemd[1]: run-netns-cni\x2d559a446d\x2da6b8\x2d86a6\x2d6b4d\x2d3c44c3c5f3d0.mount: Deactivated successfully. Apr 21 10:12:30.682096 systemd[1]: run-netns-cni\x2d4affd096\x2dcb83\x2d298f\x2d15b9\x2d46b0b37c000f.mount: Deactivated successfully. Apr 21 10:12:30.682723 systemd[1]: var-lib-kubelet-pods-155836f4\x2d49f6\x2d4af8\x2dbc89\x2d7bc3e4603c89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9gd5f.mount: Deactivated successfully. Apr 21 10:12:30.682784 systemd[1]: var-lib-kubelet-pods-155836f4\x2d49f6\x2d4af8\x2dbc89\x2d7bc3e4603c89-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.437 [ERROR][3929] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.448 [INFO][3929] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0 goldmane-9f7667bb8- calico-system f53cdfb7-2232-4081-b646-6f2c33fb2ce3 871 0 2026-04-21 10:12:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 goldmane-9f7667bb8-2q75p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali12abec3bd27 [] [] }} ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" 
WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.448 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.498 [INFO][3969] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" HandleID="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.507 [INFO][3969] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" HandleID="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef750), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"goldmane-9f7667bb8-2q75p", "timestamp":"2026-04-21 10:12:30.49799728 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001ecf20)} Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.507 [INFO][3969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.522 [INFO][3969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.523 [INFO][3969] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.592 [INFO][3969] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.599 [INFO][3969] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.606 [INFO][3969] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.612 [INFO][3969] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.618 [INFO][3969] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.618 [INFO][3969] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.621 [INFO][3969] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4 Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.636 [INFO][3969] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" 
host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.646 [INFO][3969] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.66/26] block=192.168.65.64/26 handle="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.646 [INFO][3969] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.66/26] handle="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.646 [INFO][3969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.688633 containerd[1501]: 2026-04-21 10:12:30.646 [INFO][3969] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.66/26] IPv6=[] ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" HandleID="k8s-pod-network.c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.689027 containerd[1501]: 2026-04-21 10:12:30.653 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f53cdfb7-2232-4081-b646-6f2c33fb2ce3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"goldmane-9f7667bb8-2q75p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12abec3bd27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.689027 containerd[1501]: 2026-04-21 10:12:30.653 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.66/32] ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.689027 containerd[1501]: 2026-04-21 10:12:30.653 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12abec3bd27 ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.689027 containerd[1501]: 2026-04-21 10:12:30.658 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" 
WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.689027 containerd[1501]: 2026-04-21 10:12:30.659 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f53cdfb7-2232-4081-b646-6f2c33fb2ce3", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4", Pod:"goldmane-9f7667bb8-2q75p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12abec3bd27", MAC:"6e:65:89:77:06:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.689027 
containerd[1501]: 2026-04-21 10:12:30.678 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4" Namespace="calico-system" Pod="goldmane-9f7667bb8-2q75p" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:30.724260 containerd[1501]: time="2026-04-21T10:12:30.723767070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:30.724260 containerd[1501]: time="2026-04-21T10:12:30.723983475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:30.724260 containerd[1501]: time="2026-04-21T10:12:30.724084288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:30.726732 containerd[1501]: time="2026-04-21T10:12:30.726675400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:30.747281 systemd-networkd[1404]: cali3d27a5c4d08: Link UP Apr 21 10:12:30.749205 systemd-networkd[1404]: cali3d27a5c4d08: Gained carrier Apr 21 10:12:30.767801 systemd[1]: Started cri-containerd-c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4.scope - libcontainer container c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4. 
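The mount-unit names in the records above (e.g. `run-netns-cni\x2d559a446d…` and `…kubernetes.io\x7eprojected…`) follow systemd's unit-name escaping: `/` becomes `-`, while bytes outside a small allowed set are written as `\xNN`, so `-` becomes `\x2d` and `~` becomes `\x7e`. A rough sketch of that path-escaping rule, assuming the simplified allowed set `[A-Za-z0-9:_.]`, is:

```python
def systemd_escape_path(path: str) -> str:
    """Approximate `systemd-escape --path`: strip outer slashes, map '/' to '-',
    and hex-escape other disallowed bytes as \\xNN (so '-' -> \\x2d, '~' -> \\x7e).
    This is a sketch of the rule, not a complete reimplementation."""
    s = path.strip("/")
    if not s:
        return "-"
    out = []
    for i, ch in enumerate(s):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            # escape each byte of the character as \xNN
            out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
    return "".join(out)

print(systemd_escape_path("/run/netns/cni-559a446d"))
# run-netns-cni\x2d559a446d
```

This is why the netns paths under `/run/netns/` appear in the log with every literal hyphen doubled into `\x2d` inside the `.mount` unit names.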
Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.451 [ERROR][3949] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.468 [INFO][3949] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0 calico-apiserver-55967dc964- calico-system 58cab4f0-914e-4b34-bf08-20af1325a859 872 0 2026-04-21 10:12:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55967dc964 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 calico-apiserver-55967dc964-cmfmc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3d27a5c4d08 [] [] }} ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.468 [INFO][3949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.501 [INFO][3975] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" HandleID="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" 
Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.508 [INFO][3975] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" HandleID="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef570), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"calico-apiserver-55967dc964-cmfmc", "timestamp":"2026-04-21 10:12:30.501951756 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000202f20)} Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.508 [INFO][3975] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.647 [INFO][3975] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.647 [INFO][3975] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.691 [INFO][3975] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.702 [INFO][3975] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.710 [INFO][3975] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.714 [INFO][3975] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.718 [INFO][3975] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.718 [INFO][3975] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.721 [INFO][3975] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337 Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.727 [INFO][3975] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.737 [INFO][3975] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.65.67/26] block=192.168.65.64/26 handle="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.738 [INFO][3975] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.67/26] handle="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.738 [INFO][3975] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.773714 containerd[1501]: 2026-04-21 10:12:30.738 [INFO][3975] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.67/26] IPv6=[] ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" HandleID="k8s-pod-network.e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.774948 containerd[1501]: 2026-04-21 10:12:30.740 [INFO][3949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", UID:"58cab4f0-914e-4b34-bf08-20af1325a859", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"calico-apiserver-55967dc964-cmfmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d27a5c4d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.774948 containerd[1501]: 2026-04-21 10:12:30.740 [INFO][3949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.67/32] ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.774948 containerd[1501]: 2026-04-21 10:12:30.740 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d27a5c4d08 ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.774948 containerd[1501]: 2026-04-21 10:12:30.748 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" 
WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.774948 containerd[1501]: 2026-04-21 10:12:30.750 [INFO][3949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", UID:"58cab4f0-914e-4b34-bf08-20af1325a859", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337", Pod:"calico-apiserver-55967dc964-cmfmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d27a5c4d08", MAC:"0a:d2:ab:ef:86:c3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.774948 containerd[1501]: 2026-04-21 10:12:30.766 [INFO][3949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337" Namespace="calico-system" Pod="calico-apiserver-55967dc964-cmfmc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:30.801060 containerd[1501]: time="2026-04-21T10:12:30.800975269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:30.805637 containerd[1501]: time="2026-04-21T10:12:30.801724459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:30.805637 containerd[1501]: time="2026-04-21T10:12:30.801739091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:30.805637 containerd[1501]: time="2026-04-21T10:12:30.801809626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:30.833180 containerd[1501]: time="2026-04-21T10:12:30.833150810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-whqhx,Uid:86c26390-629c-4b75-b93f-04ad32bf827a,Namespace:calico-system,Attempt:1,} returns sandbox id \"53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68\"" Apr 21 10:12:30.838248 containerd[1501]: time="2026-04-21T10:12:30.838073386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:12:30.842266 systemd[1]: Started cri-containerd-e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337.scope - libcontainer container e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337. Apr 21 10:12:30.928490 systemd-networkd[1404]: cali25023fffc23: Link UP Apr 21 10:12:30.930259 systemd-networkd[1404]: cali25023fffc23: Gained carrier Apr 21 10:12:30.950568 systemd[1]: Removed slice kubepods-besteffort-pod155836f4_49f6_4af8_bc89_7bc3e4603c89.slice - libcontainer container kubepods-besteffort-pod155836f4_49f6_4af8_bc89_7bc3e4603c89.slice. 
Apr 21 10:12:30.954398 containerd[1501]: time="2026-04-21T10:12:30.954040013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-2q75p,Uid:f53cdfb7-2232-4081-b646-6f2c33fb2ce3,Namespace:calico-system,Attempt:1,} returns sandbox id \"c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4\"" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.459 [ERROR][3939] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.473 [INFO][3939] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0 coredns-7d764666f9- kube-system 1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2 868 0 2026-04-21 10:12:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 coredns-7d764666f9-cxwzz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali25023fffc23 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.473 [INFO][3939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.979412 containerd[1501]: 
2026-04-21 10:12:30.517 [INFO][3979] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" HandleID="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.523 [INFO][3979] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" HandleID="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f0440), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"coredns-7d764666f9-cxwzz", "timestamp":"2026-04-21 10:12:30.517725559 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.523 [INFO][3979] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.739 [INFO][3979] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.739 [INFO][3979] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.794 [INFO][3979] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.826 [INFO][3979] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.844 [INFO][3979] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.852 [INFO][3979] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.861 [INFO][3979] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.861 [INFO][3979] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.864 [INFO][3979] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37 Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.879 [INFO][3979] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.890 [INFO][3979] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.65.68/26] block=192.168.65.64/26 handle="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.890 [INFO][3979] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.68/26] handle="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.890 [INFO][3979] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:30.979412 containerd[1501]: 2026-04-21 10:12:30.890 [INFO][3979] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.68/26] IPv6=[] ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" HandleID="k8s-pod-network.fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.979859 containerd[1501]: 2026-04-21 10:12:30.899 [INFO][3939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"coredns-7d764666f9-cxwzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25023fffc23", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.979859 containerd[1501]: 2026-04-21 10:12:30.900 [INFO][3939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.68/32] ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.979859 containerd[1501]: 2026-04-21 10:12:30.900 [INFO][3939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali25023fffc23 
ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.979859 containerd[1501]: 2026-04-21 10:12:30.933 [INFO][3939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:30.979859 containerd[1501]: 2026-04-21 10:12:30.934 [INFO][3939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", 
ContainerID:"fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37", Pod:"coredns-7d764666f9-cxwzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25023fffc23", MAC:"e6:e3:3c:5f:b6:e6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:30.980001 containerd[1501]: 2026-04-21 10:12:30.970 [INFO][3939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37" Namespace="kube-system" Pod="coredns-7d764666f9-cxwzz" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0" Apr 21 10:12:31.008421 containerd[1501]: time="2026-04-21T10:12:31.007665815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:31.008421 containerd[1501]: time="2026-04-21T10:12:31.007741659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:31.008421 containerd[1501]: time="2026-04-21T10:12:31.007753197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:31.008421 containerd[1501]: time="2026-04-21T10:12:31.007834178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:31.030191 systemd[1]: Started cri-containerd-fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37.scope - libcontainer container fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37. Apr 21 10:12:31.060692 containerd[1501]: time="2026-04-21T10:12:31.060565806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55967dc964-cmfmc,Uid:58cab4f0-914e-4b34-bf08-20af1325a859,Namespace:calico-system,Attempt:1,} returns sandbox id \"e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337\"" Apr 21 10:12:31.095744 containerd[1501]: time="2026-04-21T10:12:31.093791506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cxwzz,Uid:1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37\"" Apr 21 10:12:31.106826 containerd[1501]: time="2026-04-21T10:12:31.106702040Z" level=info msg="CreateContainer within sandbox \"fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:12:31.121854 containerd[1501]: time="2026-04-21T10:12:31.121205033Z" level=info msg="CreateContainer within sandbox \"fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d92461e38cdd7d5fd51cf1311655c5418404241410c961e523785366128718e6\"" Apr 21 10:12:31.122957 containerd[1501]: 
time="2026-04-21T10:12:31.122886445Z" level=info msg="StartContainer for \"d92461e38cdd7d5fd51cf1311655c5418404241410c961e523785366128718e6\"" Apr 21 10:12:31.186743 systemd[1]: Started cri-containerd-d92461e38cdd7d5fd51cf1311655c5418404241410c961e523785366128718e6.scope - libcontainer container d92461e38cdd7d5fd51cf1311655c5418404241410c961e523785366128718e6. Apr 21 10:12:31.235370 systemd[1]: Created slice kubepods-besteffort-podff0e1526_c823_47e2_8293_d941fc49e84e.slice - libcontainer container kubepods-besteffort-podff0e1526_c823_47e2_8293_d941fc49e84e.slice. Apr 21 10:12:31.261199 containerd[1501]: time="2026-04-21T10:12:31.261167345Z" level=info msg="StartContainer for \"d92461e38cdd7d5fd51cf1311655c5418404241410c961e523785366128718e6\" returns successfully" Apr 21 10:12:31.286796 kubelet[2557]: I0421 10:12:31.286634 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq9xh\" (UniqueName: \"kubernetes.io/projected/ff0e1526-c823-47e2-8293-d941fc49e84e-kube-api-access-mq9xh\") pod \"whisker-77cf95ddbc-d8nd9\" (UID: \"ff0e1526-c823-47e2-8293-d941fc49e84e\") " pod="calico-system/whisker-77cf95ddbc-d8nd9" Apr 21 10:12:31.286796 kubelet[2557]: I0421 10:12:31.286664 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ff0e1526-c823-47e2-8293-d941fc49e84e-nginx-config\") pod \"whisker-77cf95ddbc-d8nd9\" (UID: \"ff0e1526-c823-47e2-8293-d941fc49e84e\") " pod="calico-system/whisker-77cf95ddbc-d8nd9" Apr 21 10:12:31.286796 kubelet[2557]: I0421 10:12:31.286679 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff0e1526-c823-47e2-8293-d941fc49e84e-whisker-ca-bundle\") pod \"whisker-77cf95ddbc-d8nd9\" (UID: \"ff0e1526-c823-47e2-8293-d941fc49e84e\") " pod="calico-system/whisker-77cf95ddbc-d8nd9" Apr 
21 10:12:31.286796 kubelet[2557]: I0421 10:12:31.286691 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ff0e1526-c823-47e2-8293-d941fc49e84e-whisker-backend-key-pair\") pod \"whisker-77cf95ddbc-d8nd9\" (UID: \"ff0e1526-c823-47e2-8293-d941fc49e84e\") " pod="calico-system/whisker-77cf95ddbc-d8nd9" Apr 21 10:12:31.370655 kernel: calico-node[4037]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:12:31.543022 containerd[1501]: time="2026-04-21T10:12:31.542911291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77cf95ddbc-d8nd9,Uid:ff0e1526-c823-47e2-8293-d941fc49e84e,Namespace:calico-system,Attempt:0,}" Apr 21 10:12:31.676969 systemd-networkd[1404]: cali0e662701956: Link UP Apr 21 10:12:31.677844 systemd-networkd[1404]: cali0e662701956: Gained carrier Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.589 [INFO][4378] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0 whisker-77cf95ddbc- calico-system ff0e1526-c823-47e2-8293-d941fc49e84e 908 0 2026-04-21 10:12:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77cf95ddbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 whisker-77cf95ddbc-d8nd9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0e662701956 [] [] }} ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.589 [INFO][4378] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.628 [INFO][4391] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" HandleID="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.635 [INFO][4391] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" HandleID="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"whisker-77cf95ddbc-d8nd9", "timestamp":"2026-04-21 10:12:31.628878314 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000282f20)} Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.635 [INFO][4391] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.635 [INFO][4391] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.635 [INFO][4391] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.637 [INFO][4391] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.641 [INFO][4391] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.645 [INFO][4391] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.647 [INFO][4391] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.649 [INFO][4391] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.649 [INFO][4391] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.653 [INFO][4391] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.658 [INFO][4391] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.662 [INFO][4391] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.65.69/26] block=192.168.65.64/26 handle="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.662 [INFO][4391] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.69/26] handle="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.662 [INFO][4391] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:31.694689 containerd[1501]: 2026-04-21 10:12:31.662 [INFO][4391] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.69/26] IPv6=[] ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" HandleID="k8s-pod-network.f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" Apr 21 10:12:31.695091 containerd[1501]: 2026-04-21 10:12:31.665 [INFO][4378] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0", GenerateName:"whisker-77cf95ddbc-", Namespace:"calico-system", SelfLink:"", UID:"ff0e1526-c823-47e2-8293-d941fc49e84e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77cf95ddbc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"whisker-77cf95ddbc-d8nd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e662701956", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:31.695091 containerd[1501]: 2026-04-21 10:12:31.665 [INFO][4378] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.69/32] ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" Apr 21 10:12:31.695091 containerd[1501]: 2026-04-21 10:12:31.665 [INFO][4378] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e662701956 ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" Apr 21 10:12:31.695091 containerd[1501]: 2026-04-21 10:12:31.681 [INFO][4378] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" Apr 21 10:12:31.695091 containerd[1501]: 2026-04-21 10:12:31.681 [INFO][4378] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0", GenerateName:"whisker-77cf95ddbc-", Namespace:"calico-system", SelfLink:"", UID:"ff0e1526-c823-47e2-8293-d941fc49e84e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77cf95ddbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea", Pod:"whisker-77cf95ddbc-d8nd9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e662701956", MAC:"32:4a:d4:92:be:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:31.695091 containerd[1501]: 2026-04-21 10:12:31.689 [INFO][4378] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea" Namespace="calico-system" Pod="whisker-77cf95ddbc-d8nd9" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--77cf95ddbc--d8nd9-eth0" Apr 21 10:12:31.710605 containerd[1501]: time="2026-04-21T10:12:31.710479771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:31.710605 containerd[1501]: time="2026-04-21T10:12:31.710583527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:31.711210 containerd[1501]: time="2026-04-21T10:12:31.711095979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:31.711257 containerd[1501]: time="2026-04-21T10:12:31.711193837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:31.734356 systemd[1]: run-containerd-runc-k8s.io-f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea-runc.qgpUIX.mount: Deactivated successfully. Apr 21 10:12:31.742720 systemd[1]: Started cri-containerd-f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea.scope - libcontainer container f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea. 
Apr 21 10:12:31.779930 containerd[1501]: time="2026-04-21T10:12:31.779825190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77cf95ddbc-d8nd9,Uid:ff0e1526-c823-47e2-8293-d941fc49e84e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea\"" Apr 21 10:12:31.866501 systemd-networkd[1404]: vxlan.calico: Link UP Apr 21 10:12:31.868162 systemd-networkd[1404]: vxlan.calico: Gained carrier Apr 21 10:12:31.962831 systemd-networkd[1404]: cali3d27a5c4d08: Gained IPv6LL Apr 21 10:12:32.091069 systemd-networkd[1404]: cali12abec3bd27: Gained IPv6LL Apr 21 10:12:32.176374 kubelet[2557]: I0421 10:12:32.175883 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-cxwzz" podStartSLOduration=28.175869107 podStartE2EDuration="28.175869107s" podCreationTimestamp="2026-04-21 10:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:12:32.175408915 +0000 UTC m=+35.335088175" watchObservedRunningTime="2026-04-21 10:12:32.175869107 +0000 UTC m=+35.335548377" Apr 21 10:12:32.283998 systemd-networkd[1404]: calie757c9da253: Gained IPv6LL Apr 21 10:12:32.411543 systemd-networkd[1404]: cali25023fffc23: Gained IPv6LL Apr 21 10:12:32.794806 systemd-networkd[1404]: cali0e662701956: Gained IPv6LL Apr 21 10:12:32.928235 kubelet[2557]: I0421 10:12:32.928185 2557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="155836f4-49f6-4af8-bc89-7bc3e4603c89" path="/var/lib/kubelet/pods/155836f4-49f6-4af8-bc89-7bc3e4603c89/volumes" Apr 21 10:12:33.323459 containerd[1501]: time="2026-04-21T10:12:33.323400797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:33.324606 containerd[1501]: time="2026-04-21T10:12:33.324562716Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:12:33.325792 containerd[1501]: time="2026-04-21T10:12:33.325753459Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:33.327349 containerd[1501]: time="2026-04-21T10:12:33.327318224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:33.328246 containerd[1501]: time="2026-04-21T10:12:33.327827691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.489729338s" Apr 21 10:12:33.328246 containerd[1501]: time="2026-04-21T10:12:33.327857025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:12:33.328791 containerd[1501]: time="2026-04-21T10:12:33.328777611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:12:33.332693 containerd[1501]: time="2026-04-21T10:12:33.332673736Z" level=info msg="CreateContainer within sandbox \"53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:12:33.346622 containerd[1501]: time="2026-04-21T10:12:33.346576084Z" level=info msg="CreateContainer within sandbox \"53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c2160058ae431ee123fa5358cf0d8e08d37e0d7fe828a0fd75152a779220a723\"" Apr 21 10:12:33.347134 containerd[1501]: time="2026-04-21T10:12:33.347082277Z" level=info msg="StartContainer for \"c2160058ae431ee123fa5358cf0d8e08d37e0d7fe828a0fd75152a779220a723\"" Apr 21 10:12:33.395740 systemd[1]: Started cri-containerd-c2160058ae431ee123fa5358cf0d8e08d37e0d7fe828a0fd75152a779220a723.scope - libcontainer container c2160058ae431ee123fa5358cf0d8e08d37e0d7fe828a0fd75152a779220a723. Apr 21 10:12:33.431799 containerd[1501]: time="2026-04-21T10:12:33.431756914Z" level=info msg="StartContainer for \"c2160058ae431ee123fa5358cf0d8e08d37e0d7fe828a0fd75152a779220a723\" returns successfully" Apr 21 10:12:33.882813 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Apr 21 10:12:34.180783 kubelet[2557]: I0421 10:12:34.180450 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-55967dc964-whqhx" podStartSLOduration=17.686300575 podStartE2EDuration="20.17933436s" podCreationTimestamp="2026-04-21 10:12:14 +0000 UTC" firstStartedPulling="2026-04-21 10:12:30.835635453 +0000 UTC m=+33.995314713" lastFinishedPulling="2026-04-21 10:12:33.328669238 +0000 UTC m=+36.488348498" observedRunningTime="2026-04-21 10:12:34.1791549 +0000 UTC m=+37.338834160" watchObservedRunningTime="2026-04-21 10:12:34.17933436 +0000 UTC m=+37.339013620" Apr 21 10:12:35.174095 kubelet[2557]: I0421 10:12:35.174068 2557 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:12:35.447605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1361204372.mount: Deactivated successfully. 
Apr 21 10:12:36.445799 containerd[1501]: time="2026-04-21T10:12:36.445743582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:36.446941 containerd[1501]: time="2026-04-21T10:12:36.446765379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:12:36.447998 containerd[1501]: time="2026-04-21T10:12:36.447973567Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:36.450344 containerd[1501]: time="2026-04-21T10:12:36.449656139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:36.450344 containerd[1501]: time="2026-04-21T10:12:36.450175460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.121029935s" Apr 21 10:12:36.450344 containerd[1501]: time="2026-04-21T10:12:36.450195971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:12:36.452052 containerd[1501]: time="2026-04-21T10:12:36.452029239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:12:36.455570 containerd[1501]: time="2026-04-21T10:12:36.455546220Z" level=info msg="CreateContainer within sandbox 
\"c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:12:36.467328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617622174.mount: Deactivated successfully. Apr 21 10:12:36.478162 containerd[1501]: time="2026-04-21T10:12:36.477720922Z" level=info msg="CreateContainer within sandbox \"c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"8a730f1516c0b067939c9e360e21a52fea2745cec1e2000baec57b32e561fc4e\"" Apr 21 10:12:36.478341 containerd[1501]: time="2026-04-21T10:12:36.478321776Z" level=info msg="StartContainer for \"8a730f1516c0b067939c9e360e21a52fea2745cec1e2000baec57b32e561fc4e\"" Apr 21 10:12:36.507845 systemd[1]: Started cri-containerd-8a730f1516c0b067939c9e360e21a52fea2745cec1e2000baec57b32e561fc4e.scope - libcontainer container 8a730f1516c0b067939c9e360e21a52fea2745cec1e2000baec57b32e561fc4e. Apr 21 10:12:36.548803 containerd[1501]: time="2026-04-21T10:12:36.548756466Z" level=info msg="StartContainer for \"8a730f1516c0b067939c9e360e21a52fea2745cec1e2000baec57b32e561fc4e\" returns successfully" Apr 21 10:12:36.990828 containerd[1501]: time="2026-04-21T10:12:36.990658042Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:36.992574 containerd[1501]: time="2026-04-21T10:12:36.991948783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:12:36.996932 containerd[1501]: time="2026-04-21T10:12:36.996876136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 544.816021ms" Apr 21 10:12:36.996932 containerd[1501]: time="2026-04-21T10:12:36.996926122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:12:36.999027 containerd[1501]: time="2026-04-21T10:12:36.998831528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:12:37.003808 containerd[1501]: time="2026-04-21T10:12:37.003755977Z" level=info msg="CreateContainer within sandbox \"e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:12:37.033042 containerd[1501]: time="2026-04-21T10:12:37.032876632Z" level=info msg="CreateContainer within sandbox \"e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff039de546ac66830b831e6035c5dc4b0c1490bb9671cda12440325b467edcc6\"" Apr 21 10:12:37.034262 containerd[1501]: time="2026-04-21T10:12:37.034102625Z" level=info msg="StartContainer for \"ff039de546ac66830b831e6035c5dc4b0c1490bb9671cda12440325b467edcc6\"" Apr 21 10:12:37.076021 systemd[1]: Started cri-containerd-ff039de546ac66830b831e6035c5dc4b0c1490bb9671cda12440325b467edcc6.scope - libcontainer container ff039de546ac66830b831e6035c5dc4b0c1490bb9671cda12440325b467edcc6. 
Apr 21 10:12:37.158663 containerd[1501]: time="2026-04-21T10:12:37.157790824Z" level=info msg="StartContainer for \"ff039de546ac66830b831e6035c5dc4b0c1490bb9671cda12440325b467edcc6\" returns successfully" Apr 21 10:12:37.201039 kubelet[2557]: I0421 10:12:37.200321 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-55967dc964-cmfmc" podStartSLOduration=17.264516865 podStartE2EDuration="23.200206088s" podCreationTimestamp="2026-04-21 10:12:14 +0000 UTC" firstStartedPulling="2026-04-21 10:12:31.062147078 +0000 UTC m=+34.221826348" lastFinishedPulling="2026-04-21 10:12:36.997836281 +0000 UTC m=+40.157515571" observedRunningTime="2026-04-21 10:12:37.199790692 +0000 UTC m=+40.359469963" watchObservedRunningTime="2026-04-21 10:12:37.200206088 +0000 UTC m=+40.359885358" Apr 21 10:12:37.223646 kubelet[2557]: I0421 10:12:37.222964 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-2q75p" podStartSLOduration=17.727545897 podStartE2EDuration="23.222950651s" podCreationTimestamp="2026-04-21 10:12:14 +0000 UTC" firstStartedPulling="2026-04-21 10:12:30.955675507 +0000 UTC m=+34.115354767" lastFinishedPulling="2026-04-21 10:12:36.451080251 +0000 UTC m=+39.610759521" observedRunningTime="2026-04-21 10:12:37.222814086 +0000 UTC m=+40.382493356" watchObservedRunningTime="2026-04-21 10:12:37.222950651 +0000 UTC m=+40.382629911" Apr 21 10:12:38.191913 kubelet[2557]: I0421 10:12:38.191837 2557 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:12:38.799633 containerd[1501]: time="2026-04-21T10:12:38.799550946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:38.800561 containerd[1501]: time="2026-04-21T10:12:38.800452363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes 
read=6039889" Apr 21 10:12:38.801371 containerd[1501]: time="2026-04-21T10:12:38.801317664Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:38.803038 containerd[1501]: time="2026-04-21T10:12:38.803008918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:38.803483 containerd[1501]: time="2026-04-21T10:12:38.803459255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.804579964s" Apr 21 10:12:38.803515 containerd[1501]: time="2026-04-21T10:12:38.803485465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:12:38.806776 containerd[1501]: time="2026-04-21T10:12:38.806752277Z" level=info msg="CreateContainer within sandbox \"f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:12:38.829761 containerd[1501]: time="2026-04-21T10:12:38.829726110Z" level=info msg="CreateContainer within sandbox \"f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"e465c6e56ba277ad64ae9f68b823790b16981fb6ecd50e01088137f3bfc3024f\"" Apr 21 10:12:38.830216 containerd[1501]: time="2026-04-21T10:12:38.830197690Z" level=info msg="StartContainer for 
\"e465c6e56ba277ad64ae9f68b823790b16981fb6ecd50e01088137f3bfc3024f\"" Apr 21 10:12:38.858836 systemd[1]: run-containerd-runc-k8s.io-e465c6e56ba277ad64ae9f68b823790b16981fb6ecd50e01088137f3bfc3024f-runc.qrUl1d.mount: Deactivated successfully. Apr 21 10:12:38.866748 systemd[1]: Started cri-containerd-e465c6e56ba277ad64ae9f68b823790b16981fb6ecd50e01088137f3bfc3024f.scope - libcontainer container e465c6e56ba277ad64ae9f68b823790b16981fb6ecd50e01088137f3bfc3024f. Apr 21 10:12:38.898756 containerd[1501]: time="2026-04-21T10:12:38.898547479Z" level=info msg="StartContainer for \"e465c6e56ba277ad64ae9f68b823790b16981fb6ecd50e01088137f3bfc3024f\" returns successfully" Apr 21 10:12:38.901398 containerd[1501]: time="2026-04-21T10:12:38.901293178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:12:39.929336 containerd[1501]: time="2026-04-21T10:12:39.928962579Z" level=info msg="StopPodSandbox for \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\"" Apr 21 10:12:39.931262 containerd[1501]: time="2026-04-21T10:12:39.930064627Z" level=info msg="StopPodSandbox for \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\"" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.003 [INFO][4824] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.004 [INFO][4824] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" iface="eth0" netns="/var/run/netns/cni-67e1ea07-b610-2d33-d6b5-c7eaec808288" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.005 [INFO][4824] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" iface="eth0" netns="/var/run/netns/cni-67e1ea07-b610-2d33-d6b5-c7eaec808288" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.005 [INFO][4824] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" iface="eth0" netns="/var/run/netns/cni-67e1ea07-b610-2d33-d6b5-c7eaec808288" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.005 [INFO][4824] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.005 [INFO][4824] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.032 [INFO][4836] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.032 [INFO][4836] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.032 [INFO][4836] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.037 [WARNING][4836] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.037 [INFO][4836] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.038 [INFO][4836] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:40.041737 containerd[1501]: 2026-04-21 10:12:40.039 [INFO][4824] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:40.043045 containerd[1501]: time="2026-04-21T10:12:40.042685295Z" level=info msg="TearDown network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\" successfully" Apr 21 10:12:40.043045 containerd[1501]: time="2026-04-21T10:12:40.042710513Z" level=info msg="StopPodSandbox for \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\" returns successfully" Apr 21 10:12:40.045696 systemd[1]: run-netns-cni\x2d67e1ea07\x2db610\x2d2d33\x2dd6b5\x2dc7eaec808288.mount: Deactivated successfully. 
Apr 21 10:12:40.047935 containerd[1501]: time="2026-04-21T10:12:40.047857503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646984bf7c-rlbmt,Uid:f6975e89-7bfc-494e-b59a-cbe9de15cd69,Namespace:calico-system,Attempt:1,}" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.010 [INFO][4823] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.011 [INFO][4823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" iface="eth0" netns="/var/run/netns/cni-f7c0a06b-7291-7f7a-72c0-29707b51b6db" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.011 [INFO][4823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" iface="eth0" netns="/var/run/netns/cni-f7c0a06b-7291-7f7a-72c0-29707b51b6db" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.012 [INFO][4823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" iface="eth0" netns="/var/run/netns/cni-f7c0a06b-7291-7f7a-72c0-29707b51b6db" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.012 [INFO][4823] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.012 [INFO][4823] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.035 [INFO][4841] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.036 [INFO][4841] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.038 [INFO][4841] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.045 [WARNING][4841] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.045 [INFO][4841] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.047 [INFO][4841] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:40.051473 containerd[1501]: 2026-04-21 10:12:40.049 [INFO][4823] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:40.052107 containerd[1501]: time="2026-04-21T10:12:40.051814392Z" level=info msg="TearDown network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\" successfully" Apr 21 10:12:40.052107 containerd[1501]: time="2026-04-21T10:12:40.051832379Z" level=info msg="StopPodSandbox for \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\" returns successfully" Apr 21 10:12:40.054715 containerd[1501]: time="2026-04-21T10:12:40.054695724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mbg9w,Uid:9526e420-60fa-46fa-be79-ec21cb169333,Namespace:kube-system,Attempt:1,}" Apr 21 10:12:40.055073 systemd[1]: run-netns-cni\x2df7c0a06b\x2d7291\x2d7f7a\x2d72c0\x2d29707b51b6db.mount: Deactivated successfully. 
Apr 21 10:12:40.156526 systemd-networkd[1404]: calia75ad3feb12: Link UP Apr 21 10:12:40.157571 systemd-networkd[1404]: calia75ad3feb12: Gained carrier Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.094 [INFO][4849] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0 calico-kube-controllers-646984bf7c- calico-system f6975e89-7bfc-494e-b59a-cbe9de15cd69 968 0 2026-04-21 10:12:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:646984bf7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 calico-kube-controllers-646984bf7c-rlbmt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia75ad3feb12 [] [] }} ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.094 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.117 [INFO][4873] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" HandleID="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" 
Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.124 [INFO][4873] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" HandleID="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"calico-kube-controllers-646984bf7c-rlbmt", "timestamp":"2026-04-21 10:12:40.117046293 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.124 [INFO][4873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.125 [INFO][4873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.125 [INFO][4873] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.126 [INFO][4873] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.130 [INFO][4873] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.137 [INFO][4873] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.139 [INFO][4873] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.140 [INFO][4873] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.140 [INFO][4873] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.142 [INFO][4873] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428 Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.147 [INFO][4873] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.151 [INFO][4873] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.65.70/26] block=192.168.65.64/26 handle="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.151 [INFO][4873] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.70/26] handle="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.151 [INFO][4873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:40.177821 containerd[1501]: 2026-04-21 10:12:40.151 [INFO][4873] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.70/26] IPv6=[] ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" HandleID="k8s-pod-network.84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.178944 containerd[1501]: 2026-04-21 10:12:40.154 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0", GenerateName:"calico-kube-controllers-646984bf7c-", Namespace:"calico-system", SelfLink:"", UID:"f6975e89-7bfc-494e-b59a-cbe9de15cd69", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646984bf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"calico-kube-controllers-646984bf7c-rlbmt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia75ad3feb12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:40.178944 containerd[1501]: 2026-04-21 10:12:40.154 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.70/32] ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.178944 containerd[1501]: 2026-04-21 10:12:40.154 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia75ad3feb12 ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.178944 containerd[1501]: 2026-04-21 10:12:40.157 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.178944 containerd[1501]: 2026-04-21 10:12:40.157 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0", GenerateName:"calico-kube-controllers-646984bf7c-", Namespace:"calico-system", SelfLink:"", UID:"f6975e89-7bfc-494e-b59a-cbe9de15cd69", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646984bf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428", Pod:"calico-kube-controllers-646984bf7c-rlbmt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.70/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia75ad3feb12", MAC:"c6:2e:ea:61:f9:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:40.178944 containerd[1501]: 2026-04-21 10:12:40.169 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428" Namespace="calico-system" Pod="calico-kube-controllers-646984bf7c-rlbmt" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:40.200600 containerd[1501]: time="2026-04-21T10:12:40.200329115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:40.200600 containerd[1501]: time="2026-04-21T10:12:40.200373703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:40.200600 containerd[1501]: time="2026-04-21T10:12:40.200383077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:40.200600 containerd[1501]: time="2026-04-21T10:12:40.200459492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:40.217734 systemd[1]: Started cri-containerd-84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428.scope - libcontainer container 84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428. 
Apr 21 10:12:40.262511 systemd-networkd[1404]: cali63c460eb984: Link UP Apr 21 10:12:40.263884 systemd-networkd[1404]: cali63c460eb984: Gained carrier Apr 21 10:12:40.268556 containerd[1501]: time="2026-04-21T10:12:40.268336079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646984bf7c-rlbmt,Uid:f6975e89-7bfc-494e-b59a-cbe9de15cd69,Namespace:calico-system,Attempt:1,} returns sandbox id \"84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428\"" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.108 [INFO][4860] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0 coredns-7d764666f9- kube-system 9526e420-60fa-46fa-be79-ec21cb169333 969 0 2026-04-21 10:12:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 coredns-7d764666f9-mbg9w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali63c460eb984 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.108 [INFO][4860] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.133 [INFO][4880] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" HandleID="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.139 [INFO][4880] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" HandleID="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb10), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"coredns-7d764666f9-mbg9w", "timestamp":"2026-04-21 10:12:40.133116506 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000293340)} Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.139 [INFO][4880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.151 [INFO][4880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.151 [INFO][4880] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.228 [INFO][4880] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.234 [INFO][4880] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.239 [INFO][4880] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.241 [INFO][4880] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.243 [INFO][4880] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.243 [INFO][4880] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.244 [INFO][4880] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.247 [INFO][4880] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.253 [INFO][4880] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.65.71/26] block=192.168.65.64/26 handle="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.253 [INFO][4880] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.71/26] handle="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.253 [INFO][4880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:40.286720 containerd[1501]: 2026-04-21 10:12:40.253 [INFO][4880] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.71/26] IPv6=[] ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" HandleID="k8s-pod-network.7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.287134 containerd[1501]: 2026-04-21 10:12:40.257 [INFO][4860] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9526e420-60fa-46fa-be79-ec21cb169333", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"coredns-7d764666f9-mbg9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63c460eb984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:40.287134 containerd[1501]: 2026-04-21 10:12:40.258 [INFO][4860] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.71/32] ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.287134 containerd[1501]: 2026-04-21 10:12:40.258 [INFO][4860] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63c460eb984 
ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.287134 containerd[1501]: 2026-04-21 10:12:40.266 [INFO][4860] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.287134 containerd[1501]: 2026-04-21 10:12:40.266 [INFO][4860] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9526e420-60fa-46fa-be79-ec21cb169333", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", 
ContainerID:"7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c", Pod:"coredns-7d764666f9-mbg9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63c460eb984", MAC:"02:10:bf:86:75:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:40.287542 containerd[1501]: 2026-04-21 10:12:40.279 [INFO][4860] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c" Namespace="kube-system" Pod="coredns-7d764666f9-mbg9w" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:40.305507 containerd[1501]: time="2026-04-21T10:12:40.305438956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:40.305620 containerd[1501]: time="2026-04-21T10:12:40.305517465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:40.305620 containerd[1501]: time="2026-04-21T10:12:40.305539798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:40.307642 containerd[1501]: time="2026-04-21T10:12:40.306176065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:40.330772 systemd[1]: Started cri-containerd-7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c.scope - libcontainer container 7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c. Apr 21 10:12:40.372077 containerd[1501]: time="2026-04-21T10:12:40.372041428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mbg9w,Uid:9526e420-60fa-46fa-be79-ec21cb169333,Namespace:kube-system,Attempt:1,} returns sandbox id \"7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c\"" Apr 21 10:12:40.377337 containerd[1501]: time="2026-04-21T10:12:40.377310551Z" level=info msg="CreateContainer within sandbox \"7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:12:40.389666 containerd[1501]: time="2026-04-21T10:12:40.389521752Z" level=info msg="CreateContainer within sandbox \"7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0448769c9cb8cfe2e29e3ee3148d479ee2440820d821bca4f7b98a1555d095a8\"" Apr 21 10:12:40.390395 containerd[1501]: time="2026-04-21T10:12:40.390374573Z" level=info msg="StartContainer for \"0448769c9cb8cfe2e29e3ee3148d479ee2440820d821bca4f7b98a1555d095a8\"" Apr 21 10:12:40.413761 systemd[1]: Started cri-containerd-0448769c9cb8cfe2e29e3ee3148d479ee2440820d821bca4f7b98a1555d095a8.scope - libcontainer container 
0448769c9cb8cfe2e29e3ee3148d479ee2440820d821bca4f7b98a1555d095a8. Apr 21 10:12:40.437306 containerd[1501]: time="2026-04-21T10:12:40.437191546Z" level=info msg="StartContainer for \"0448769c9cb8cfe2e29e3ee3148d479ee2440820d821bca4f7b98a1555d095a8\" returns successfully" Apr 21 10:12:40.885939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415700382.mount: Deactivated successfully. Apr 21 10:12:40.897830 containerd[1501]: time="2026-04-21T10:12:40.897786107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:40.898538 containerd[1501]: time="2026-04-21T10:12:40.898502473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:12:40.899425 containerd[1501]: time="2026-04-21T10:12:40.899394695Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:40.901113 containerd[1501]: time="2026-04-21T10:12:40.901079748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:40.901915 containerd[1501]: time="2026-04-21T10:12:40.901519220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.000186052s" Apr 21 10:12:40.901915 containerd[1501]: time="2026-04-21T10:12:40.901543226Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:12:40.904835 containerd[1501]: time="2026-04-21T10:12:40.904817679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:12:40.911554 containerd[1501]: time="2026-04-21T10:12:40.911534818Z" level=info msg="CreateContainer within sandbox \"f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:12:40.922501 containerd[1501]: time="2026-04-21T10:12:40.922470441Z" level=info msg="CreateContainer within sandbox \"f161a44ea5b35b52cf71c130bb420e55fb59bd823502f26ff1a114d1ab5325ea\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4d9735a8c309dc73410a47d403208bc281fbe692838e2fd55ba340dbdbf20ced\"" Apr 21 10:12:40.925960 containerd[1501]: time="2026-04-21T10:12:40.923904578Z" level=info msg="StartContainer for \"4d9735a8c309dc73410a47d403208bc281fbe692838e2fd55ba340dbdbf20ced\"" Apr 21 10:12:40.955738 systemd[1]: Started cri-containerd-4d9735a8c309dc73410a47d403208bc281fbe692838e2fd55ba340dbdbf20ced.scope - libcontainer container 4d9735a8c309dc73410a47d403208bc281fbe692838e2fd55ba340dbdbf20ced. 
Apr 21 10:12:40.992826 containerd[1501]: time="2026-04-21T10:12:40.992787819Z" level=info msg="StartContainer for \"4d9735a8c309dc73410a47d403208bc281fbe692838e2fd55ba340dbdbf20ced\" returns successfully" Apr 21 10:12:41.215798 kubelet[2557]: I0421 10:12:41.215582 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mbg9w" podStartSLOduration=37.21557144 podStartE2EDuration="37.21557144s" podCreationTimestamp="2026-04-21 10:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:12:41.212888125 +0000 UTC m=+44.372567386" watchObservedRunningTime="2026-04-21 10:12:41.21557144 +0000 UTC m=+44.375250710" Apr 21 10:12:41.247084 kubelet[2557]: I0421 10:12:41.246978 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-77cf95ddbc-d8nd9" podStartSLOduration=1.126290102 podStartE2EDuration="10.246959698s" podCreationTimestamp="2026-04-21 10:12:31 +0000 UTC" firstStartedPulling="2026-04-21 10:12:31.781579862 +0000 UTC m=+34.941259122" lastFinishedPulling="2026-04-21 10:12:40.902249458 +0000 UTC m=+44.061928718" observedRunningTime="2026-04-21 10:12:41.244796666 +0000 UTC m=+44.404475967" watchObservedRunningTime="2026-04-21 10:12:41.246959698 +0000 UTC m=+44.406638998" Apr 21 10:12:42.074931 systemd-networkd[1404]: calia75ad3feb12: Gained IPv6LL Apr 21 10:12:42.139750 systemd-networkd[1404]: cali63c460eb984: Gained IPv6LL Apr 21 10:12:43.477180 containerd[1501]: time="2026-04-21T10:12:43.477135626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:43.478546 containerd[1501]: time="2026-04-21T10:12:43.478358144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:12:43.479654 
containerd[1501]: time="2026-04-21T10:12:43.479584417Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:43.481663 containerd[1501]: time="2026-04-21T10:12:43.481628760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:43.482472 containerd[1501]: time="2026-04-21T10:12:43.482070914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.577154267s" Apr 21 10:12:43.482472 containerd[1501]: time="2026-04-21T10:12:43.482099617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:12:43.499168 containerd[1501]: time="2026-04-21T10:12:43.499133219Z" level=info msg="CreateContainer within sandbox \"84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:12:43.513582 containerd[1501]: time="2026-04-21T10:12:43.513550589Z" level=info msg="CreateContainer within sandbox \"84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f8e93e215eb2fe78566e254cb63092a8d3ce8baf0b645c51b824dadf1d02a722\"" Apr 21 10:12:43.515337 containerd[1501]: time="2026-04-21T10:12:43.515306037Z" level=info msg="StartContainer for 
\"f8e93e215eb2fe78566e254cb63092a8d3ce8baf0b645c51b824dadf1d02a722\"" Apr 21 10:12:43.548789 systemd[1]: Started cri-containerd-f8e93e215eb2fe78566e254cb63092a8d3ce8baf0b645c51b824dadf1d02a722.scope - libcontainer container f8e93e215eb2fe78566e254cb63092a8d3ce8baf0b645c51b824dadf1d02a722. Apr 21 10:12:43.591656 containerd[1501]: time="2026-04-21T10:12:43.591580659Z" level=info msg="StartContainer for \"f8e93e215eb2fe78566e254cb63092a8d3ce8baf0b645c51b824dadf1d02a722\" returns successfully" Apr 21 10:12:43.928319 containerd[1501]: time="2026-04-21T10:12:43.928233746Z" level=info msg="StopPodSandbox for \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\"" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:43.996 [INFO][5170] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:43.996 [INFO][5170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" iface="eth0" netns="/var/run/netns/cni-8e8ee5b4-f3a1-f05a-842d-0ef02ed7ec45" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:43.998 [INFO][5170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" iface="eth0" netns="/var/run/netns/cni-8e8ee5b4-f3a1-f05a-842d-0ef02ed7ec45" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:43.998 [INFO][5170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" iface="eth0" netns="/var/run/netns/cni-8e8ee5b4-f3a1-f05a-842d-0ef02ed7ec45" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:43.998 [INFO][5170] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:43.998 [INFO][5170] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:44.019 [INFO][5177] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:44.019 [INFO][5177] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:44.019 [INFO][5177] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:44.024 [WARNING][5177] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:44.024 [INFO][5177] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:44.026 [INFO][5177] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:44.032883 containerd[1501]: 2026-04-21 10:12:44.029 [INFO][5170] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:44.033417 containerd[1501]: time="2026-04-21T10:12:44.033073177Z" level=info msg="TearDown network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\" successfully" Apr 21 10:12:44.033417 containerd[1501]: time="2026-04-21T10:12:44.033096843Z" level=info msg="StopPodSandbox for \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\" returns successfully" Apr 21 10:12:44.035858 containerd[1501]: time="2026-04-21T10:12:44.035817842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmvbc,Uid:27b58c18-a535-4675-97b1-656bf0345381,Namespace:calico-system,Attempt:1,}" Apr 21 10:12:44.135833 systemd-networkd[1404]: califb416de839e: Link UP Apr 21 10:12:44.136941 systemd-networkd[1404]: califb416de839e: Gained carrier Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.071 [INFO][5184] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0 csi-node-driver- calico-system 27b58c18-a535-4675-97b1-656bf0345381 1008 0 2026-04-21 10:12:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-7-16e5f88171 csi-node-driver-kmvbc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califb416de839e [] [] }} ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.072 [INFO][5184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.102 [INFO][5196] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" HandleID="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.109 [INFO][5196] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" HandleID="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000277460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-7-16e5f88171", "pod":"csi-node-driver-kmvbc", "timestamp":"2026-04-21 10:12:44.10288205 +0000 UTC"}, Hostname:"ci-4081-3-7-7-16e5f88171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000328dc0)} Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.109 [INFO][5196] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.109 [INFO][5196] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.109 [INFO][5196] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-7-16e5f88171' Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.111 [INFO][5196] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.114 [INFO][5196] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.117 [INFO][5196] ipam/ipam.go 526: Trying affinity for 192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.119 [INFO][5196] ipam/ipam.go 160: Attempting to load block cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.121 [INFO][5196] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.121 [INFO][5196] ipam/ipam.go 1245: Attempting to 
assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.122 [INFO][5196] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.125 [INFO][5196] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.129 [INFO][5196] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.65.72/26] block=192.168.65.64/26 handle="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.129 [INFO][5196] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.65.72/26] handle="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" host="ci-4081-3-7-7-16e5f88171" Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.129 [INFO][5196] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:12:44.149914 containerd[1501]: 2026-04-21 10:12:44.129 [INFO][5196] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.65.72/26] IPv6=[] ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" HandleID="k8s-pod-network.f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.150959 containerd[1501]: 2026-04-21 10:12:44.132 [INFO][5184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27b58c18-a535-4675-97b1-656bf0345381", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"", Pod:"csi-node-driver-kmvbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb416de839e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:44.150959 containerd[1501]: 2026-04-21 10:12:44.132 [INFO][5184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.65.72/32] ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.150959 containerd[1501]: 2026-04-21 10:12:44.132 [INFO][5184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb416de839e ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.150959 containerd[1501]: 2026-04-21 10:12:44.135 [INFO][5184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.150959 containerd[1501]: 2026-04-21 10:12:44.136 [INFO][5184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27b58c18-a535-4675-97b1-656bf0345381", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad", Pod:"csi-node-driver-kmvbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb416de839e", MAC:"02:19:af:87:72:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:44.150959 containerd[1501]: 2026-04-21 10:12:44.147 [INFO][5184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad" Namespace="calico-system" Pod="csi-node-driver-kmvbc" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:44.171124 containerd[1501]: time="2026-04-21T10:12:44.171042375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:12:44.171686 containerd[1501]: time="2026-04-21T10:12:44.171443508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:12:44.171686 containerd[1501]: time="2026-04-21T10:12:44.171458821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:44.171686 containerd[1501]: time="2026-04-21T10:12:44.171631070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:12:44.190730 systemd[1]: Started cri-containerd-f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad.scope - libcontainer container f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad. Apr 21 10:12:44.216077 containerd[1501]: time="2026-04-21T10:12:44.215363113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kmvbc,Uid:27b58c18-a535-4675-97b1-656bf0345381,Namespace:calico-system,Attempt:1,} returns sandbox id \"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad\"" Apr 21 10:12:44.219085 containerd[1501]: time="2026-04-21T10:12:44.218905927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:12:44.230134 kubelet[2557]: I0421 10:12:44.229760 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-646984bf7c-rlbmt" podStartSLOduration=26.021073246 podStartE2EDuration="29.229749374s" podCreationTimestamp="2026-04-21 10:12:15 +0000 UTC" firstStartedPulling="2026-04-21 10:12:40.274066295 +0000 UTC m=+43.433745555" lastFinishedPulling="2026-04-21 10:12:43.482742413 +0000 UTC m=+46.642421683" observedRunningTime="2026-04-21 10:12:44.229109031 +0000 UTC m=+47.388788291" watchObservedRunningTime="2026-04-21 10:12:44.229749374 +0000 UTC 
m=+47.389428644" Apr 21 10:12:44.495333 systemd[1]: run-netns-cni\x2d8e8ee5b4\x2df3a1\x2df05a\x2d842d\x2d0ef02ed7ec45.mount: Deactivated successfully. Apr 21 10:12:45.658843 systemd-networkd[1404]: califb416de839e: Gained IPv6LL Apr 21 10:12:45.980372 containerd[1501]: time="2026-04-21T10:12:45.979581114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:45.980999 containerd[1501]: time="2026-04-21T10:12:45.980957453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:12:45.983640 containerd[1501]: time="2026-04-21T10:12:45.982845510Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:45.985891 containerd[1501]: time="2026-04-21T10:12:45.985859769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:45.986766 containerd[1501]: time="2026-04-21T10:12:45.986280993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.766316624s" Apr 21 10:12:45.986766 containerd[1501]: time="2026-04-21T10:12:45.986305760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:12:45.992422 containerd[1501]: time="2026-04-21T10:12:45.992399407Z" level=info msg="CreateContainer within sandbox 
\"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:12:46.010050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1263310503.mount: Deactivated successfully. Apr 21 10:12:46.010785 containerd[1501]: time="2026-04-21T10:12:46.010299634Z" level=info msg="CreateContainer within sandbox \"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0b2bb0b4819aad4d9cbab9906c5b7da9211874497fac9f14d2023ee4b8bf14cd\"" Apr 21 10:12:46.012731 containerd[1501]: time="2026-04-21T10:12:46.011961341Z" level=info msg="StartContainer for \"0b2bb0b4819aad4d9cbab9906c5b7da9211874497fac9f14d2023ee4b8bf14cd\"" Apr 21 10:12:46.039306 systemd[1]: run-containerd-runc-k8s.io-0b2bb0b4819aad4d9cbab9906c5b7da9211874497fac9f14d2023ee4b8bf14cd-runc.clUGiv.mount: Deactivated successfully. Apr 21 10:12:46.047762 systemd[1]: Started cri-containerd-0b2bb0b4819aad4d9cbab9906c5b7da9211874497fac9f14d2023ee4b8bf14cd.scope - libcontainer container 0b2bb0b4819aad4d9cbab9906c5b7da9211874497fac9f14d2023ee4b8bf14cd. 
Apr 21 10:12:46.073092 containerd[1501]: time="2026-04-21T10:12:46.072927216Z" level=info msg="StartContainer for \"0b2bb0b4819aad4d9cbab9906c5b7da9211874497fac9f14d2023ee4b8bf14cd\" returns successfully" Apr 21 10:12:46.075825 containerd[1501]: time="2026-04-21T10:12:46.075669727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:12:48.059593 containerd[1501]: time="2026-04-21T10:12:48.059538004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:48.060669 containerd[1501]: time="2026-04-21T10:12:48.060631587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:12:48.061691 containerd[1501]: time="2026-04-21T10:12:48.061650097Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:48.063442 containerd[1501]: time="2026-04-21T10:12:48.063401639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:12:48.063926 containerd[1501]: time="2026-04-21T10:12:48.063816542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.988126845s" Apr 21 10:12:48.063926 containerd[1501]: time="2026-04-21T10:12:48.063840668Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:12:48.068336 containerd[1501]: time="2026-04-21T10:12:48.068287699Z" level=info msg="CreateContainer within sandbox \"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:12:48.081480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486673301.mount: Deactivated successfully. Apr 21 10:12:48.083720 containerd[1501]: time="2026-04-21T10:12:48.083691200Z" level=info msg="CreateContainer within sandbox \"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b1cbbfcb0ae1250c3e81333e5a94dbc05e8078b06ed65f0b0fb00225e1d46030\"" Apr 21 10:12:48.084540 containerd[1501]: time="2026-04-21T10:12:48.084072002Z" level=info msg="StartContainer for \"b1cbbfcb0ae1250c3e81333e5a94dbc05e8078b06ed65f0b0fb00225e1d46030\"" Apr 21 10:12:48.117770 systemd[1]: Started cri-containerd-b1cbbfcb0ae1250c3e81333e5a94dbc05e8078b06ed65f0b0fb00225e1d46030.scope - libcontainer container b1cbbfcb0ae1250c3e81333e5a94dbc05e8078b06ed65f0b0fb00225e1d46030. 
Apr 21 10:12:48.142973 containerd[1501]: time="2026-04-21T10:12:48.142930025Z" level=info msg="StartContainer for \"b1cbbfcb0ae1250c3e81333e5a94dbc05e8078b06ed65f0b0fb00225e1d46030\" returns successfully" Apr 21 10:12:48.242715 kubelet[2557]: I0421 10:12:48.242059 2557 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-kmvbc" podStartSLOduration=29.394938852 podStartE2EDuration="33.242046638s" podCreationTimestamp="2026-04-21 10:12:15 +0000 UTC" firstStartedPulling="2026-04-21 10:12:44.217667396 +0000 UTC m=+47.377346656" lastFinishedPulling="2026-04-21 10:12:48.064775182 +0000 UTC m=+51.224454442" observedRunningTime="2026-04-21 10:12:48.241435339 +0000 UTC m=+51.401114599" watchObservedRunningTime="2026-04-21 10:12:48.242046638 +0000 UTC m=+51.401725908" Apr 21 10:12:49.008680 kubelet[2557]: I0421 10:12:49.008452 2557 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:12:49.008680 kubelet[2557]: I0421 10:12:49.008494 2557 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:12:56.921153 containerd[1501]: time="2026-04-21T10:12:56.921047589Z" level=info msg="StopPodSandbox for \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\"" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.001 [WARNING][5400] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", UID:"86c26390-629c-4b75-b93f-04ad32bf827a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68", Pod:"calico-apiserver-55967dc964-whqhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie757c9da253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.001 [INFO][5400] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.001 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" iface="eth0" netns="" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.001 [INFO][5400] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.001 [INFO][5400] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.025 [INFO][5407] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.025 [INFO][5407] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.025 [INFO][5407] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.030 [WARNING][5407] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.030 [INFO][5407] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.031 [INFO][5407] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.036759 containerd[1501]: 2026-04-21 10:12:57.033 [INFO][5400] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.036759 containerd[1501]: time="2026-04-21T10:12:57.036759503Z" level=info msg="TearDown network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\" successfully" Apr 21 10:12:57.037419 containerd[1501]: time="2026-04-21T10:12:57.036780264Z" level=info msg="StopPodSandbox for \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\" returns successfully" Apr 21 10:12:57.037419 containerd[1501]: time="2026-04-21T10:12:57.037203570Z" level=info msg="RemovePodSandbox for \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\"" Apr 21 10:12:57.037419 containerd[1501]: time="2026-04-21T10:12:57.037223731Z" level=info msg="Forcibly stopping sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\"" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.067 [WARNING][5421] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", UID:"86c26390-629c-4b75-b93f-04ad32bf827a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"53c1ee18d1b2470b1709c56be7be3d733b661ad1603c4370ea5005d33482fc68", Pod:"calico-apiserver-55967dc964-whqhx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie757c9da253", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.067 [INFO][5421] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.067 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" iface="eth0" netns="" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.067 [INFO][5421] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.067 [INFO][5421] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.085 [INFO][5428] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.085 [INFO][5428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.085 [INFO][5428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.090 [WARNING][5428] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.090 [INFO][5428] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" HandleID="k8s-pod-network.e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--whqhx-eth0" Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.091 [INFO][5428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.096426 containerd[1501]: 2026-04-21 10:12:57.094 [INFO][5421] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e" Apr 21 10:12:57.097462 containerd[1501]: time="2026-04-21T10:12:57.096433211Z" level=info msg="TearDown network for sandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\" successfully" Apr 21 10:12:57.101471 containerd[1501]: time="2026-04-21T10:12:57.101414741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:12:57.101471 containerd[1501]: time="2026-04-21T10:12:57.101471736Z" level=info msg="RemovePodSandbox \"e9a066149b9bedee3ff8d1fc96b47482430425d6244627c13fe61d0ba3ca729e\" returns successfully" Apr 21 10:12:57.101908 containerd[1501]: time="2026-04-21T10:12:57.101883755Z" level=info msg="StopPodSandbox for \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\"" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.132 [WARNING][5442] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.132 [INFO][5442] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.132 [INFO][5442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" iface="eth0" netns="" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.132 [INFO][5442] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.132 [INFO][5442] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.148 [INFO][5450] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.148 [INFO][5450] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.149 [INFO][5450] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.158 [WARNING][5450] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.158 [INFO][5450] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.162 [INFO][5450] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.168846 containerd[1501]: 2026-04-21 10:12:57.166 [INFO][5442] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.169271 containerd[1501]: time="2026-04-21T10:12:57.169177956Z" level=info msg="TearDown network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\" successfully" Apr 21 10:12:57.169328 containerd[1501]: time="2026-04-21T10:12:57.169318697Z" level=info msg="StopPodSandbox for \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\" returns successfully" Apr 21 10:12:57.169994 containerd[1501]: time="2026-04-21T10:12:57.169945879Z" level=info msg="RemovePodSandbox for \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\"" Apr 21 10:12:57.169994 containerd[1501]: time="2026-04-21T10:12:57.169969955Z" level=info msg="Forcibly stopping sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\"" Apr 21 10:12:57.184145 kubelet[2557]: I0421 10:12:57.183509 2557 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.208 [WARNING][5482] 
cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" WorkloadEndpoint="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.209 [INFO][5482] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.209 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" iface="eth0" netns="" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.209 [INFO][5482] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.209 [INFO][5482] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.236 [INFO][5492] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.236 [INFO][5492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.236 [INFO][5492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.243 [WARNING][5492] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.243 [INFO][5492] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" HandleID="k8s-pod-network.cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Workload="ci--4081--3--7--7--16e5f88171-k8s-whisker--5dbf5b654b--jzpn9-eth0" Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.246 [INFO][5492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.253423 containerd[1501]: 2026-04-21 10:12:57.250 [INFO][5482] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14" Apr 21 10:12:57.254032 containerd[1501]: time="2026-04-21T10:12:57.253841485Z" level=info msg="TearDown network for sandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\" successfully" Apr 21 10:12:57.260636 containerd[1501]: time="2026-04-21T10:12:57.260345962Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:12:57.260636 containerd[1501]: time="2026-04-21T10:12:57.260382586Z" level=info msg="RemovePodSandbox \"cc179d4a0c223fd25fd39733178b1db95f2b9a8e4836a2723e58dad68b4fff14\" returns successfully" Apr 21 10:12:57.261236 containerd[1501]: time="2026-04-21T10:12:57.261221346Z" level=info msg="StopPodSandbox for \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\"" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.307 [WARNING][5510] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f53cdfb7-2232-4081-b646-6f2c33fb2ce3", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4", Pod:"goldmane-9f7667bb8-2q75p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali12abec3bd27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.307 [INFO][5510] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.307 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" iface="eth0" netns="" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.308 [INFO][5510] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.308 [INFO][5510] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.329 [INFO][5518] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.329 [INFO][5518] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.329 [INFO][5518] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.334 [WARNING][5518] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.334 [INFO][5518] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.335 [INFO][5518] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.338839 containerd[1501]: 2026-04-21 10:12:57.337 [INFO][5510] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.339421 containerd[1501]: time="2026-04-21T10:12:57.339045968Z" level=info msg="TearDown network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\" successfully" Apr 21 10:12:57.339421 containerd[1501]: time="2026-04-21T10:12:57.339066980Z" level=info msg="StopPodSandbox for \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\" returns successfully" Apr 21 10:12:57.339517 containerd[1501]: time="2026-04-21T10:12:57.339448592Z" level=info msg="RemovePodSandbox for \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\"" Apr 21 10:12:57.339517 containerd[1501]: time="2026-04-21T10:12:57.339474050Z" level=info msg="Forcibly stopping sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\"" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.365 [WARNING][5532] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f53cdfb7-2232-4081-b646-6f2c33fb2ce3", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"c24619657538097eda496531cdaa5a08f6d85f9016460623d7005acf417499e4", Pod:"goldmane-9f7667bb8-2q75p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12abec3bd27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.365 [INFO][5532] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.365 [INFO][5532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" iface="eth0" netns="" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.366 [INFO][5532] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.366 [INFO][5532] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.383 [INFO][5539] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.383 [INFO][5539] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.383 [INFO][5539] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.387 [WARNING][5539] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.387 [INFO][5539] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" HandleID="k8s-pod-network.47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Workload="ci--4081--3--7--7--16e5f88171-k8s-goldmane--9f7667bb8--2q75p-eth0" Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.388 [INFO][5539] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.392151 containerd[1501]: 2026-04-21 10:12:57.390 [INFO][5532] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665" Apr 21 10:12:57.392567 containerd[1501]: time="2026-04-21T10:12:57.392124284Z" level=info msg="TearDown network for sandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\" successfully" Apr 21 10:12:57.396181 containerd[1501]: time="2026-04-21T10:12:57.396156548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:12:57.396245 containerd[1501]: time="2026-04-21T10:12:57.396217890Z" level=info msg="RemovePodSandbox \"47cc707afc6e4a44853a63dc24ae5fc8ab62b99f2bf59c46ac4e6eee1408c665\" returns successfully" Apr 21 10:12:57.396854 containerd[1501]: time="2026-04-21T10:12:57.396623368Z" level=info msg="StopPodSandbox for \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\"" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.421 [WARNING][5553] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", UID:"58cab4f0-914e-4b34-bf08-20af1325a859", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337", Pod:"calico-apiserver-55967dc964-cmfmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d27a5c4d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.421 [INFO][5553] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.421 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" iface="eth0" netns="" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.421 [INFO][5553] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.421 [INFO][5553] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.440 [INFO][5560] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.440 [INFO][5560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.440 [INFO][5560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.444 [WARNING][5560] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.444 [INFO][5560] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.445 [INFO][5560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.449106 containerd[1501]: 2026-04-21 10:12:57.447 [INFO][5553] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.449106 containerd[1501]: time="2026-04-21T10:12:57.448936435Z" level=info msg="TearDown network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\" successfully" Apr 21 10:12:57.449106 containerd[1501]: time="2026-04-21T10:12:57.448957898Z" level=info msg="StopPodSandbox for \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\" returns successfully" Apr 21 10:12:57.450198 containerd[1501]: time="2026-04-21T10:12:57.450170368Z" level=info msg="RemovePodSandbox for \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\"" Apr 21 10:12:57.450258 containerd[1501]: time="2026-04-21T10:12:57.450204109Z" level=info msg="Forcibly stopping sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\"" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.477 [WARNING][5575] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0", GenerateName:"calico-apiserver-55967dc964-", Namespace:"calico-system", SelfLink:"", UID:"58cab4f0-914e-4b34-bf08-20af1325a859", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55967dc964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"e48137fef1e5792428a6226808e69e9842b02e41e8b06d536889b5bc5d656337", Pod:"calico-apiserver-55967dc964-cmfmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3d27a5c4d08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.477 [INFO][5575] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.477 [INFO][5575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" iface="eth0" netns="" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.477 [INFO][5575] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.477 [INFO][5575] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.492 [INFO][5582] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.492 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.492 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.497 [WARNING][5582] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.497 [INFO][5582] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" HandleID="k8s-pod-network.ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--apiserver--55967dc964--cmfmc-eth0" Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.498 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.501517 containerd[1501]: 2026-04-21 10:12:57.499 [INFO][5575] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e" Apr 21 10:12:57.501861 containerd[1501]: time="2026-04-21T10:12:57.501547659Z" level=info msg="TearDown network for sandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\" successfully" Apr 21 10:12:57.505644 containerd[1501]: time="2026-04-21T10:12:57.505604812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:12:57.505726 containerd[1501]: time="2026-04-21T10:12:57.505668998Z" level=info msg="RemovePodSandbox \"ec17b85da1dc8863de7c96ab186a04410ca63d31fc739235ba02163ecd75c51e\" returns successfully" Apr 21 10:12:57.506286 containerd[1501]: time="2026-04-21T10:12:57.506042679Z" level=info msg="StopPodSandbox for \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\"" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.530 [WARNING][5596] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0", GenerateName:"calico-kube-controllers-646984bf7c-", Namespace:"calico-system", SelfLink:"", UID:"f6975e89-7bfc-494e-b59a-cbe9de15cd69", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646984bf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428", Pod:"calico-kube-controllers-646984bf7c-rlbmt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia75ad3feb12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.530 [INFO][5596] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.530 [INFO][5596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" iface="eth0" netns="" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.530 [INFO][5596] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.530 [INFO][5596] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.545 [INFO][5603] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.545 [INFO][5603] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.545 [INFO][5603] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.549 [WARNING][5603] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.549 [INFO][5603] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.550 [INFO][5603] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.554319 containerd[1501]: 2026-04-21 10:12:57.552 [INFO][5596] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.554870 containerd[1501]: time="2026-04-21T10:12:57.554349660Z" level=info msg="TearDown network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\" successfully" Apr 21 10:12:57.554870 containerd[1501]: time="2026-04-21T10:12:57.554370632Z" level=info msg="StopPodSandbox for \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\" returns successfully" Apr 21 10:12:57.554870 containerd[1501]: time="2026-04-21T10:12:57.554822941Z" level=info msg="RemovePodSandbox for \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\"" Apr 21 10:12:57.554870 containerd[1501]: time="2026-04-21T10:12:57.554841649Z" level=info msg="Forcibly stopping sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\"" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.579 [WARNING][5617] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0", GenerateName:"calico-kube-controllers-646984bf7c-", Namespace:"calico-system", SelfLink:"", UID:"f6975e89-7bfc-494e-b59a-cbe9de15cd69", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646984bf7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"84a343715f769762b739db31fa61a877f86109658d86c1ee26901b4e5e9f6428", Pod:"calico-kube-controllers-646984bf7c-rlbmt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia75ad3feb12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.580 [INFO][5617] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.580 [INFO][5617] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" iface="eth0" netns="" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.580 [INFO][5617] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.580 [INFO][5617] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.597 [INFO][5624] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.597 [INFO][5624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.597 [INFO][5624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.601 [WARNING][5624] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.602 [INFO][5624] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" HandleID="k8s-pod-network.7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Workload="ci--4081--3--7--7--16e5f88171-k8s-calico--kube--controllers--646984bf7c--rlbmt-eth0" Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.603 [INFO][5624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.607013 containerd[1501]: 2026-04-21 10:12:57.605 [INFO][5617] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288" Apr 21 10:12:57.607346 containerd[1501]: time="2026-04-21T10:12:57.607022167Z" level=info msg="TearDown network for sandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\" successfully" Apr 21 10:12:57.610740 containerd[1501]: time="2026-04-21T10:12:57.610713549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:12:57.610826 containerd[1501]: time="2026-04-21T10:12:57.610765157Z" level=info msg="RemovePodSandbox \"7e530f33a27ab744dc40b4fe57623ef237a77ce0cfe292824bf587ed4a20f288\" returns successfully" Apr 21 10:12:57.611204 containerd[1501]: time="2026-04-21T10:12:57.611175823Z" level=info msg="StopPodSandbox for \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\"" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.636 [WARNING][5638] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27b58c18-a535-4675-97b1-656bf0345381", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad", Pod:"csi-node-driver-kmvbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb416de839e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.636 [INFO][5638] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.636 [INFO][5638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" iface="eth0" netns="" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.636 [INFO][5638] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.636 [INFO][5638] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.652 [INFO][5645] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.652 [INFO][5645] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.652 [INFO][5645] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.656 [WARNING][5645] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.656 [INFO][5645] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.657 [INFO][5645] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.661526 containerd[1501]: 2026-04-21 10:12:57.659 [INFO][5638] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.661868 containerd[1501]: time="2026-04-21T10:12:57.661558000Z" level=info msg="TearDown network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\" successfully" Apr 21 10:12:57.661868 containerd[1501]: time="2026-04-21T10:12:57.661579664Z" level=info msg="StopPodSandbox for \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\" returns successfully" Apr 21 10:12:57.662282 containerd[1501]: time="2026-04-21T10:12:57.662040034Z" level=info msg="RemovePodSandbox for \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\"" Apr 21 10:12:57.662282 containerd[1501]: time="2026-04-21T10:12:57.662066053Z" level=info msg="Forcibly stopping sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\"" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.688 [WARNING][5660] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"27b58c18-a535-4675-97b1-656bf0345381", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"f243e33517f703055500cf1f93fc876938bcc1b4a8a081b948ae41dc2ec2e2ad", Pod:"csi-node-driver-kmvbc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califb416de839e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.688 [INFO][5660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.688 [INFO][5660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" iface="eth0" netns="" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.688 [INFO][5660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.688 [INFO][5660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.703 [INFO][5667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.703 [INFO][5667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.703 [INFO][5667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.707 [WARNING][5667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.708 [INFO][5667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" HandleID="k8s-pod-network.959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Workload="ci--4081--3--7--7--16e5f88171-k8s-csi--node--driver--kmvbc-eth0" Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.709 [INFO][5667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.712732 containerd[1501]: 2026-04-21 10:12:57.710 [INFO][5660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e" Apr 21 10:12:57.712732 containerd[1501]: time="2026-04-21T10:12:57.712699408Z" level=info msg="TearDown network for sandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\" successfully" Apr 21 10:12:57.717216 containerd[1501]: time="2026-04-21T10:12:57.717163420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:12:57.717376 containerd[1501]: time="2026-04-21T10:12:57.717224612Z" level=info msg="RemovePodSandbox \"959f7d6f4a10ed3a0d936523eefc8485e3e1a38c28643b23ec62bc5ab3b6de9e\" returns successfully" Apr 21 10:12:57.717715 containerd[1501]: time="2026-04-21T10:12:57.717688619Z" level=info msg="StopPodSandbox for \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\"" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.743 [WARNING][5682] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9526e420-60fa-46fa-be79-ec21cb169333", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c", Pod:"coredns-7d764666f9-mbg9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63c460eb984", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.743 [INFO][5682] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.743 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" iface="eth0" netns="" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.743 [INFO][5682] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.743 [INFO][5682] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.760 [INFO][5689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.760 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.760 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.764 [WARNING][5689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.764 [INFO][5689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.765 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.768963 containerd[1501]: 2026-04-21 10:12:57.767 [INFO][5682] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.769296 containerd[1501]: time="2026-04-21T10:12:57.769012661Z" level=info msg="TearDown network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\" successfully" Apr 21 10:12:57.769296 containerd[1501]: time="2026-04-21T10:12:57.769033071Z" level=info msg="StopPodSandbox for \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\" returns successfully" Apr 21 10:12:57.769687 containerd[1501]: time="2026-04-21T10:12:57.769641065Z" level=info msg="RemovePodSandbox for \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\"" Apr 21 10:12:57.769687 containerd[1501]: time="2026-04-21T10:12:57.769663438Z" level=info msg="Forcibly stopping sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\"" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.800 [WARNING][5704] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"9526e420-60fa-46fa-be79-ec21cb169333", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"7f36f4f20987666ee50035281631ffbb79858181f1b0873d22ade9a0ee3e508c", Pod:"coredns-7d764666f9-mbg9w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali63c460eb984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.800 [INFO][5704] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.800 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" iface="eth0" netns="" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.800 [INFO][5704] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.800 [INFO][5704] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.815 [INFO][5712] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.815 [INFO][5712] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.816 [INFO][5712] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.820 [WARNING][5712] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.820 [INFO][5712] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" HandleID="k8s-pod-network.7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--mbg9w-eth0" Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.821 [INFO][5712] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:12:57.824901 containerd[1501]: 2026-04-21 10:12:57.822 [INFO][5704] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6" Apr 21 10:12:57.824901 containerd[1501]: time="2026-04-21T10:12:57.824812823Z" level=info msg="TearDown network for sandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\" successfully" Apr 21 10:12:57.829106 containerd[1501]: time="2026-04-21T10:12:57.829085128Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:12:57.829225 containerd[1501]: time="2026-04-21T10:12:57.829206981Z" level=info msg="RemovePodSandbox \"7451078170f08742287363b9eb431bb2edbe15a9a8c3dc4be27bca11c51da9b6\" returns successfully" Apr 21 10:12:57.829630 containerd[1501]: time="2026-04-21T10:12:57.829595925Z" level=info msg="StopPodSandbox for \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\"" Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.854 [WARNING][5727] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37", Pod:"coredns-7d764666f9-cxwzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25023fffc23", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.854 [INFO][5727] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.854 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" iface="eth0" netns=""
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.854 [INFO][5727] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41"
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.854 [INFO][5727] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41"
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.868 [INFO][5734] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0"
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.869 [INFO][5734] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.869 [INFO][5734] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.872 [WARNING][5734] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0"
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.873 [INFO][5734] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0"
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.873 [INFO][5734] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:12:57.877668 containerd[1501]: 2026-04-21 10:12:57.875 [INFO][5727] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41"
Apr 21 10:12:57.878341 containerd[1501]: time="2026-04-21T10:12:57.877962516Z" level=info msg="TearDown network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\" successfully"
Apr 21 10:12:57.878341 containerd[1501]: time="2026-04-21T10:12:57.877984869Z" level=info msg="StopPodSandbox for \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\" returns successfully"
Apr 21 10:12:57.878704 containerd[1501]: time="2026-04-21T10:12:57.878465672Z" level=info msg="RemovePodSandbox for \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\""
Apr 21 10:12:57.878704 containerd[1501]: time="2026-04-21T10:12:57.878484640Z" level=info msg="Forcibly stopping sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\""
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.902 [WARNING][5748] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"1dab33d7-a2e6-4c4b-8b0e-6b44c82e72e2", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 12, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-7-16e5f88171", ContainerID:"fe11f6dbc7e164318149dd89a52a86798ffb8a42debe5a2c590ae9a4b8c02b37", Pod:"coredns-7d764666f9-cxwzz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali25023fffc23", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.902 [INFO][5748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41"
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.902 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" iface="eth0" netns=""
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.902 [INFO][5748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41"
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.902 [INFO][5748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41"
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.919 [INFO][5756] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0"
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.919 [INFO][5756] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.919 [INFO][5756] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.925 [WARNING][5756] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0"
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.925 [INFO][5756] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" HandleID="k8s-pod-network.0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41" Workload="ci--4081--3--7--7--16e5f88171-k8s-coredns--7d764666f9--cxwzz-eth0"
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.926 [INFO][5756] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:12:57.930553 containerd[1501]: 2026-04-21 10:12:57.928 [INFO][5748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41"
Apr 21 10:12:57.931118 containerd[1501]: time="2026-04-21T10:12:57.930603535Z" level=info msg="TearDown network for sandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\" successfully"
Apr 21 10:12:57.934376 containerd[1501]: time="2026-04-21T10:12:57.934341608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:12:57.934417 containerd[1501]: time="2026-04-21T10:12:57.934395559Z" level=info msg="RemovePodSandbox \"0337db9de196f8bd9d6d33f6ae2bdbdf1f0580dc242e0e29c0bd5b3c6785ba41\" returns successfully"
Apr 21 10:13:04.010827 systemd[1]: Started sshd@7-37.27.23.25:22-50.85.169.122:40900.service - OpenSSH per-connection server daemon (50.85.169.122:40900).
Apr 21 10:13:04.237387 sshd[5797]: Accepted publickey for core from 50.85.169.122 port 40900 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:04.240225 sshd[5797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:04.246316 systemd-logind[1486]: New session 8 of user core.
Apr 21 10:13:04.250733 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 10:13:04.503753 sshd[5797]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:04.507929 systemd[1]: sshd@7-37.27.23.25:22-50.85.169.122:40900.service: Deactivated successfully.
Apr 21 10:13:04.508002 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit.
Apr 21 10:13:04.510272 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 10:13:04.511422 systemd-logind[1486]: Removed session 8.
Apr 21 10:13:09.555468 systemd[1]: Started sshd@8-37.27.23.25:22-50.85.169.122:57756.service - OpenSSH per-connection server daemon (50.85.169.122:57756).
Apr 21 10:13:09.780301 sshd[5843]: Accepted publickey for core from 50.85.169.122 port 57756 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:09.782748 sshd[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:09.790253 systemd-logind[1486]: New session 9 of user core.
Apr 21 10:13:09.799848 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 21 10:13:10.004452 sshd[5843]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:10.007846 systemd[1]: sshd@8-37.27.23.25:22-50.85.169.122:57756.service: Deactivated successfully.
Apr 21 10:13:10.010004 systemd[1]: session-9.scope: Deactivated successfully.
Apr 21 10:13:10.011467 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit.
Apr 21 10:13:10.013131 systemd-logind[1486]: Removed session 9.
Apr 21 10:13:15.056016 systemd[1]: Started sshd@9-37.27.23.25:22-50.85.169.122:57768.service - OpenSSH per-connection server daemon (50.85.169.122:57768).
Apr 21 10:13:15.274412 sshd[5908]: Accepted publickey for core from 50.85.169.122 port 57768 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:15.278972 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:15.288554 systemd-logind[1486]: New session 10 of user core.
Apr 21 10:13:15.294753 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 21 10:13:15.525983 sshd[5908]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:15.529682 systemd[1]: sshd@9-37.27.23.25:22-50.85.169.122:57768.service: Deactivated successfully.
Apr 21 10:13:15.531472 systemd[1]: session-10.scope: Deactivated successfully.
Apr 21 10:13:15.533668 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit.
Apr 21 10:13:15.535103 systemd-logind[1486]: Removed session 10.
Apr 21 10:13:19.892824 kubelet[2557]: I0421 10:13:19.892343 2557 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:13:20.578052 systemd[1]: Started sshd@10-37.27.23.25:22-50.85.169.122:53438.service - OpenSSH per-connection server daemon (50.85.169.122:53438).
Apr 21 10:13:20.820881 sshd[5949]: Accepted publickey for core from 50.85.169.122 port 53438 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:20.824708 sshd[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:20.833256 systemd-logind[1486]: New session 11 of user core.
Apr 21 10:13:20.836878 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 21 10:13:21.096314 sshd[5949]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:21.098881 systemd[1]: sshd@10-37.27.23.25:22-50.85.169.122:53438.service: Deactivated successfully.
Apr 21 10:13:21.101043 systemd[1]: session-11.scope: Deactivated successfully.
Apr 21 10:13:21.102372 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit.
Apr 21 10:13:21.103278 systemd-logind[1486]: Removed session 11.
Apr 21 10:13:21.135652 systemd[1]: Started sshd@11-37.27.23.25:22-50.85.169.122:53446.service - OpenSSH per-connection server daemon (50.85.169.122:53446).
Apr 21 10:13:21.357103 sshd[5963]: Accepted publickey for core from 50.85.169.122 port 53446 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:21.359921 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:21.367740 systemd-logind[1486]: New session 12 of user core.
Apr 21 10:13:21.376968 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 21 10:13:21.645345 sshd[5963]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:21.649144 systemd[1]: sshd@11-37.27.23.25:22-50.85.169.122:53446.service: Deactivated successfully.
Apr 21 10:13:21.651289 systemd[1]: session-12.scope: Deactivated successfully.
Apr 21 10:13:21.652908 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit.
Apr 21 10:13:21.654117 systemd-logind[1486]: Removed session 12.
Apr 21 10:13:21.682943 systemd[1]: Started sshd@12-37.27.23.25:22-50.85.169.122:53458.service - OpenSSH per-connection server daemon (50.85.169.122:53458).
Apr 21 10:13:21.893725 sshd[5975]: Accepted publickey for core from 50.85.169.122 port 53458 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:21.896874 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:21.905187 systemd-logind[1486]: New session 13 of user core.
Apr 21 10:13:21.910876 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 21 10:13:22.147792 sshd[5975]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:22.151457 systemd[1]: sshd@12-37.27.23.25:22-50.85.169.122:53458.service: Deactivated successfully.
Apr 21 10:13:22.153241 systemd[1]: session-13.scope: Deactivated successfully.
Apr 21 10:13:22.155270 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit.
Apr 21 10:13:22.156789 systemd-logind[1486]: Removed session 13.
Apr 21 10:13:27.199045 systemd[1]: Started sshd@13-37.27.23.25:22-50.85.169.122:53462.service - OpenSSH per-connection server daemon (50.85.169.122:53462).
Apr 21 10:13:27.418524 sshd[5994]: Accepted publickey for core from 50.85.169.122 port 53462 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:27.419178 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:27.429350 systemd-logind[1486]: New session 14 of user core.
Apr 21 10:13:27.436026 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 21 10:13:27.667744 sshd[5994]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:27.671846 systemd[1]: sshd@13-37.27.23.25:22-50.85.169.122:53462.service: Deactivated successfully.
Apr 21 10:13:27.675604 systemd[1]: session-14.scope: Deactivated successfully.
Apr 21 10:13:27.677588 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit.
Apr 21 10:13:27.678785 systemd-logind[1486]: Removed session 14.
Apr 21 10:13:32.715044 systemd[1]: Started sshd@14-37.27.23.25:22-50.85.169.122:58500.service - OpenSSH per-connection server daemon (50.85.169.122:58500).
Apr 21 10:13:32.934166 sshd[6029]: Accepted publickey for core from 50.85.169.122 port 58500 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:32.938023 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:32.946161 systemd-logind[1486]: New session 15 of user core.
Apr 21 10:13:32.952005 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 21 10:13:33.206569 sshd[6029]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:33.209485 systemd[1]: sshd@14-37.27.23.25:22-50.85.169.122:58500.service: Deactivated successfully.
Apr 21 10:13:33.211179 systemd[1]: session-15.scope: Deactivated successfully.
Apr 21 10:13:33.212377 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit.
Apr 21 10:13:33.213453 systemd-logind[1486]: Removed session 15.
Apr 21 10:13:33.251959 systemd[1]: Started sshd@15-37.27.23.25:22-50.85.169.122:58510.service - OpenSSH per-connection server daemon (50.85.169.122:58510).
Apr 21 10:13:33.457975 sshd[6042]: Accepted publickey for core from 50.85.169.122 port 58510 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:33.461016 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:33.467411 systemd-logind[1486]: New session 16 of user core.
Apr 21 10:13:33.476799 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 21 10:13:33.941571 sshd[6042]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:33.946908 systemd[1]: sshd@15-37.27.23.25:22-50.85.169.122:58510.service: Deactivated successfully.
Apr 21 10:13:33.951749 systemd[1]: session-16.scope: Deactivated successfully.
Apr 21 10:13:33.953140 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit.
Apr 21 10:13:33.954509 systemd-logind[1486]: Removed session 16.
Apr 21 10:13:33.991099 systemd[1]: Started sshd@16-37.27.23.25:22-50.85.169.122:58524.service - OpenSSH per-connection server daemon (50.85.169.122:58524).
Apr 21 10:13:34.204877 sshd[6054]: Accepted publickey for core from 50.85.169.122 port 58524 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:34.207705 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:34.212958 systemd-logind[1486]: New session 17 of user core.
Apr 21 10:13:34.221867 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:13:35.100426 sshd[6054]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:35.103169 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:13:35.105062 systemd[1]: sshd@16-37.27.23.25:22-50.85.169.122:58524.service: Deactivated successfully.
Apr 21 10:13:35.107381 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:13:35.109320 systemd-logind[1486]: Removed session 17.
Apr 21 10:13:35.140791 systemd[1]: Started sshd@17-37.27.23.25:22-50.85.169.122:58534.service - OpenSSH per-connection server daemon (50.85.169.122:58534).
Apr 21 10:13:35.346013 sshd[6082]: Accepted publickey for core from 50.85.169.122 port 58534 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:35.348955 sshd[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:35.357765 systemd-logind[1486]: New session 18 of user core.
Apr 21 10:13:35.366859 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:13:35.664917 sshd[6082]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:35.667718 systemd[1]: sshd@17-37.27.23.25:22-50.85.169.122:58534.service: Deactivated successfully.
Apr 21 10:13:35.669941 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:13:35.671419 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:13:35.672221 systemd-logind[1486]: Removed session 18.
Apr 21 10:13:35.704264 systemd[1]: Started sshd@18-37.27.23.25:22-50.85.169.122:58548.service - OpenSSH per-connection server daemon (50.85.169.122:58548).
Apr 21 10:13:35.924318 sshd[6093]: Accepted publickey for core from 50.85.169.122 port 58548 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:35.929253 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:35.938068 systemd-logind[1486]: New session 19 of user core.
Apr 21 10:13:35.941831 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:13:36.152313 sshd[6093]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:36.157378 systemd[1]: sshd@18-37.27.23.25:22-50.85.169.122:58548.service: Deactivated successfully.
Apr 21 10:13:36.161252 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:13:36.162187 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:13:36.163290 systemd-logind[1486]: Removed session 19.
Apr 21 10:13:41.204162 systemd[1]: Started sshd@19-37.27.23.25:22-50.85.169.122:34828.service - OpenSSH per-connection server daemon (50.85.169.122:34828).
Apr 21 10:13:41.434321 sshd[6130]: Accepted publickey for core from 50.85.169.122 port 34828 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:41.437943 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:41.446539 systemd-logind[1486]: New session 20 of user core.
Apr 21 10:13:41.451921 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:13:41.678239 sshd[6130]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:41.688427 systemd[1]: sshd@19-37.27.23.25:22-50.85.169.122:34828.service: Deactivated successfully.
Apr 21 10:13:41.693976 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:13:41.695527 systemd-logind[1486]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:13:41.698012 systemd-logind[1486]: Removed session 20.
Apr 21 10:13:46.732022 systemd[1]: Started sshd@20-37.27.23.25:22-50.85.169.122:34840.service - OpenSSH per-connection server daemon (50.85.169.122:34840).
Apr 21 10:13:46.958565 sshd[6163]: Accepted publickey for core from 50.85.169.122 port 34840 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:13:46.961692 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:13:46.970333 systemd-logind[1486]: New session 21 of user core.
Apr 21 10:13:46.975874 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:13:47.206563 sshd[6163]: pam_unix(sshd:session): session closed for user core
Apr 21 10:13:47.210652 systemd[1]: sshd@20-37.27.23.25:22-50.85.169.122:34840.service: Deactivated successfully.
Apr 21 10:13:47.214520 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:13:47.217529 systemd-logind[1486]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:13:47.218858 systemd-logind[1486]: Removed session 21.
Apr 21 10:14:03.515988 systemd[1]: cri-containerd-22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b.scope: Deactivated successfully.
Apr 21 10:14:03.517231 systemd[1]: cri-containerd-22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b.scope: Consumed 5.182s CPU time.
Apr 21 10:14:03.539269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b-rootfs.mount: Deactivated successfully.
Apr 21 10:14:03.539957 containerd[1501]: time="2026-04-21T10:14:03.539526584Z" level=info msg="shim disconnected" id=22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b namespace=k8s.io
Apr 21 10:14:03.539957 containerd[1501]: time="2026-04-21T10:14:03.539605633Z" level=warning msg="cleaning up after shim disconnected" id=22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b namespace=k8s.io
Apr 21 10:14:03.539957 containerd[1501]: time="2026-04-21T10:14:03.539648819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:03.556005 containerd[1501]: time="2026-04-21T10:14:03.555957126Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:14:03Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:14:03.934191 kubelet[2557]: E0421 10:14:03.933915 2557 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38588->10.0.0.2:2379: read: connection timed out"
Apr 21 10:14:04.335259 systemd[1]: cri-containerd-f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021.scope: Deactivated successfully.
Apr 21 10:14:04.335838 systemd[1]: cri-containerd-f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021.scope: Consumed 2.827s CPU time, 17.7M memory peak, 0B memory swap peak.
Apr 21 10:14:04.376413 containerd[1501]: time="2026-04-21T10:14:04.376235351Z" level=info msg="shim disconnected" id=f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021 namespace=k8s.io
Apr 21 10:14:04.377801 containerd[1501]: time="2026-04-21T10:14:04.376591904Z" level=warning msg="cleaning up after shim disconnected" id=f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021 namespace=k8s.io
Apr 21 10:14:04.378716 containerd[1501]: time="2026-04-21T10:14:04.378674122Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:04.382345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021-rootfs.mount: Deactivated successfully.
Apr 21 10:14:04.473927 kubelet[2557]: I0421 10:14:04.472711 2557 scope.go:122] "RemoveContainer" containerID="f9174562b2c76b790e96b219dbd271d1af526dfc561b10b4d7cc4115bdcf0021"
Apr 21 10:14:04.480192 containerd[1501]: time="2026-04-21T10:14:04.479892514Z" level=info msg="CreateContainer within sandbox \"1aeba63ce6da760f64f45d61235996cea981ef88b1863772cd1334bc6d666294\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 21 10:14:04.481145 kubelet[2557]: I0421 10:14:04.480830 2557 scope.go:122] "RemoveContainer" containerID="22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b"
Apr 21 10:14:04.486353 containerd[1501]: time="2026-04-21T10:14:04.486261557Z" level=info msg="CreateContainer within sandbox \"6857d601cf82eae9a3429b028ded4724cf99a1a2e03aa3caf86c5e00f3758beb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 21 10:14:04.520643 containerd[1501]: time="2026-04-21T10:14:04.517105817Z" level=info msg="CreateContainer within sandbox \"1aeba63ce6da760f64f45d61235996cea981ef88b1863772cd1334bc6d666294\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"eb174a672dd7b72703a363b0cd221257469b3fdabb0102592837af681dc989ad\""
Apr 21 10:14:04.520643 containerd[1501]: time="2026-04-21T10:14:04.519962125Z" level=info msg="StartContainer for \"eb174a672dd7b72703a363b0cd221257469b3fdabb0102592837af681dc989ad\""
Apr 21 10:14:04.521036 containerd[1501]: time="2026-04-21T10:14:04.520954515Z" level=info msg="CreateContainer within sandbox \"6857d601cf82eae9a3429b028ded4724cf99a1a2e03aa3caf86c5e00f3758beb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5\""
Apr 21 10:14:04.521841 containerd[1501]: time="2026-04-21T10:14:04.521266269Z" level=info msg="StartContainer for \"a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5\""
Apr 21 10:14:04.569759 systemd[1]: Started cri-containerd-a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5.scope - libcontainer container a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5.
Apr 21 10:14:04.571436 systemd[1]: Started cri-containerd-eb174a672dd7b72703a363b0cd221257469b3fdabb0102592837af681dc989ad.scope - libcontainer container eb174a672dd7b72703a363b0cd221257469b3fdabb0102592837af681dc989ad.
Apr 21 10:14:04.606214 containerd[1501]: time="2026-04-21T10:14:04.606184704Z" level=info msg="StartContainer for \"a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5\" returns successfully"
Apr 21 10:14:04.618262 containerd[1501]: time="2026-04-21T10:14:04.616828212Z" level=info msg="StartContainer for \"eb174a672dd7b72703a363b0cd221257469b3fdabb0102592837af681dc989ad\" returns successfully"
Apr 21 10:14:07.839091 kubelet[2557]: E0421 10:14:07.836083 2557 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38218->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-7-16e5f88171.18a857a93d27af25 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-7-16e5f88171,UID:a744c92ced63e68c2611b8b2cf47573f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-7-16e5f88171,},FirstTimestamp:2026-04-21 10:13:57.384761125 +0000 UTC m=+120.544440425,LastTimestamp:2026-04-21 10:13:57.384761125 +0000 UTC m=+120.544440425,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-7-16e5f88171,}"
Apr 21 10:14:09.709818 systemd[1]: cri-containerd-572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b.scope: Deactivated successfully.
Apr 21 10:14:09.710902 systemd[1]: cri-containerd-572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b.scope: Consumed 1.280s CPU time, 15.8M memory peak, 0B memory swap peak.
Apr 21 10:14:09.753175 containerd[1501]: time="2026-04-21T10:14:09.753098723Z" level=info msg="shim disconnected" id=572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b namespace=k8s.io
Apr 21 10:14:09.753175 containerd[1501]: time="2026-04-21T10:14:09.753170141Z" level=warning msg="cleaning up after shim disconnected" id=572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b namespace=k8s.io
Apr 21 10:14:09.754304 containerd[1501]: time="2026-04-21T10:14:09.753187859Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:09.762247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b-rootfs.mount: Deactivated successfully.
Apr 21 10:14:10.498124 kubelet[2557]: I0421 10:14:10.498078 2557 scope.go:122] "RemoveContainer" containerID="572311fbd8dda07ecb7209e65bd19b89ad1d438bd8dd95124606d05e3387974b"
Apr 21 10:14:10.499967 containerd[1501]: time="2026-04-21T10:14:10.499915166Z" level=info msg="CreateContainer within sandbox \"967e84a1cb65748e5f4b10660499aee97dfbced361e9ca6c80c416c69c229021\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 21 10:14:10.517400 containerd[1501]: time="2026-04-21T10:14:10.515972574Z" level=info msg="CreateContainer within sandbox \"967e84a1cb65748e5f4b10660499aee97dfbced361e9ca6c80c416c69c229021\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7e46183aca23c5c321b77168f0efa4bb783da5e5f0fb3780b541aa178f5b77bf\""
Apr 21 10:14:10.517400 containerd[1501]: time="2026-04-21T10:14:10.516448066Z" level=info msg="StartContainer for \"7e46183aca23c5c321b77168f0efa4bb783da5e5f0fb3780b541aa178f5b77bf\""
Apr 21 10:14:10.516182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2464600873.mount: Deactivated successfully.
Apr 21 10:14:10.546755 systemd[1]: Started cri-containerd-7e46183aca23c5c321b77168f0efa4bb783da5e5f0fb3780b541aa178f5b77bf.scope - libcontainer container 7e46183aca23c5c321b77168f0efa4bb783da5e5f0fb3780b541aa178f5b77bf.
Apr 21 10:14:10.583238 containerd[1501]: time="2026-04-21T10:14:10.582866606Z" level=info msg="StartContainer for \"7e46183aca23c5c321b77168f0efa4bb783da5e5f0fb3780b541aa178f5b77bf\" returns successfully"
Apr 21 10:14:13.935206 kubelet[2557]: E0421 10:14:13.934793 2557 controller.go:251] "Failed to update lease" err="Put \"https://37.27.23.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-16e5f88171?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 21 10:14:15.767918 systemd[1]: cri-containerd-a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5.scope: Deactivated successfully.
Apr 21 10:14:15.810521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5-rootfs.mount: Deactivated successfully.
Apr 21 10:14:15.816282 containerd[1501]: time="2026-04-21T10:14:15.816196940Z" level=info msg="shim disconnected" id=a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5 namespace=k8s.io
Apr 21 10:14:15.816282 containerd[1501]: time="2026-04-21T10:14:15.816268398Z" level=warning msg="cleaning up after shim disconnected" id=a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5 namespace=k8s.io
Apr 21 10:14:15.816282 containerd[1501]: time="2026-04-21T10:14:15.816284032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:14:16.521671 kubelet[2557]: I0421 10:14:16.521294 2557 scope.go:122] "RemoveContainer" containerID="22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b"
Apr 21 10:14:16.522916 kubelet[2557]: I0421 10:14:16.522528 2557 scope.go:122] "RemoveContainer" containerID="a2a3e6f0866f885b97381651962a195c8c9405bd65af2d282991de8423e6f8a5"
Apr 21 10:14:16.522916 kubelet[2557]: E0421 10:14:16.522835 2557 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-qvglx_tigera-operator(c3134158-eb81-415b-a377-54dd8df0b47b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-qvglx" podUID="c3134158-eb81-415b-a377-54dd8df0b47b"
Apr 21 10:14:16.523211 containerd[1501]: time="2026-04-21T10:14:16.523115081Z" level=info msg="RemoveContainer for \"22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b\""
Apr 21 10:14:16.529514 containerd[1501]: time="2026-04-21T10:14:16.529386497Z" level=info msg="RemoveContainer for \"22223d3be912005ae027c1b186f270f3bd38a089f9f3b60c63ce33c4d092a96b\" returns successfully"
Apr 21 10:14:23.935225 kubelet[2557]: E0421 10:14:23.935112 2557 controller.go:251] "Failed to update lease" err="Put \"https://37.27.23.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-7-16e5f88171?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"