Apr 13 20:08:34.007523 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:08:34.007540 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:08:34.007549 kernel: BIOS-provided physical RAM map:
Apr 13 20:08:34.007554 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 13 20:08:34.007558 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 13 20:08:34.007562 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 13 20:08:34.007568 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 13 20:08:34.007572 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 13 20:08:34.007576 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 13 20:08:34.007581 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 13 20:08:34.007585 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 13 20:08:34.007592 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 13 20:08:34.007596 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 13 20:08:34.007601 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 13 20:08:34.007606 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 13 20:08:34.007611 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 13 20:08:34.007618 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 13 20:08:34.007622 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 13 20:08:34.007627 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 13 20:08:34.007631 kernel: NX (Execute Disable) protection: active
Apr 13 20:08:34.007636 kernel: APIC: Static calls initialized
Apr 13 20:08:34.007640 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 13 20:08:34.007645 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198
Apr 13 20:08:34.007650 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 13 20:08:34.007654 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 13 20:08:34.007659 kernel: SMBIOS 3.0.0 present.
Apr 13 20:08:34.007664 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 13 20:08:34.007668 kernel: Hypervisor detected: KVM
Apr 13 20:08:34.007675 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:08:34.007680 kernel: kvm-clock: using sched offset of 12615568394 cycles
Apr 13 20:08:34.007685 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:08:34.007690 kernel: tsc: Detected 2400.000 MHz processor
Apr 13 20:08:34.007695 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:08:34.007699 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:08:34.007704 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 13 20:08:34.007709 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 13 20:08:34.007713 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:08:34.007720 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 13 20:08:34.007725 kernel: Using GB pages for direct mapping
Apr 13 20:08:34.007730 kernel: Secure boot disabled
Apr 13 20:08:34.007738 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:08:34.007743 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 13 20:08:34.007748 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 13 20:08:34.007753 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:34.007761 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:34.007765 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 13 20:08:34.007770 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:34.007775 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:34.007780 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:34.007785 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 13 20:08:34.007790 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 13 20:08:34.007797 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 13 20:08:34.007802 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 13 20:08:34.007807 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 13 20:08:34.007812 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 13 20:08:34.007817 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 13 20:08:34.007822 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 13 20:08:34.007826 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 13 20:08:34.007831 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 13 20:08:34.007836 kernel: No NUMA configuration found
Apr 13 20:08:34.007843 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 13 20:08:34.007848 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Apr 13 20:08:34.007853 kernel: Zone ranges:
Apr 13 20:08:34.007858 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:08:34.007863 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:08:34.007868 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 13 20:08:34.007873 kernel: Movable zone start for each node
Apr 13 20:08:34.007878 kernel: Early memory node ranges
Apr 13 20:08:34.007883 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 13 20:08:34.007888 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 13 20:08:34.007896 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 13 20:08:34.007900 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 13 20:08:34.007905 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 13 20:08:34.007910 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 13 20:08:34.007915 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:08:34.007920 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 13 20:08:34.007925 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 13 20:08:34.007930 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 13 20:08:34.007935 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 13 20:08:34.007942 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 13 20:08:34.007947 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 13 20:08:34.007952 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:08:34.007956 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:08:34.007961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 13 20:08:34.007966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:08:34.007971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:08:34.007976 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:08:34.007981 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:08:34.007989 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:08:34.007994 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 13 20:08:34.007998 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:08:34.008003 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 13 20:08:34.008008 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 13 20:08:34.008013 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:08:34.008018 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:08:34.008023 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:08:34.008028 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:08:34.008035 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:08:34.008040 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:08:34.008045 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 13 20:08:34.008051 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:08:34.008056 kernel: random: crng init done
Apr 13 20:08:34.008061 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:08:34.008096 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 13 20:08:34.008101 kernel: Fallback order for Node 0: 0
Apr 13 20:08:34.008109 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 13 20:08:34.008113 kernel: Policy zone: Normal
Apr 13 20:08:34.008119 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:08:34.008123 kernel: software IO TLB: area num 2.
Apr 13 20:08:34.008129 kernel: Memory: 3819404K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 271560K reserved, 0K cma-reserved)
Apr 13 20:08:34.008134 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:08:34.008138 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:08:34.008143 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:08:34.008148 kernel: Dynamic Preempt: voluntary
Apr 13 20:08:34.008155 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:08:34.008161 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:08:34.008167 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:08:34.008172 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:08:34.008184 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:08:34.008191 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:08:34.008196 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:08:34.008202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:08:34.008208 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:08:34.008213 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:08:34.008218 kernel: Console: colour dummy device 80x25
Apr 13 20:08:34.008223 kernel: printk: console [tty0] enabled
Apr 13 20:08:34.008231 kernel: printk: console [ttyS0] enabled
Apr 13 20:08:34.008236 kernel: ACPI: Core revision 20230628
Apr 13 20:08:34.008242 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 13 20:08:34.008247 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:08:34.008252 kernel: x2apic enabled
Apr 13 20:08:34.008259 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:08:34.008265 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 13 20:08:34.008270 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 13 20:08:34.008275 kernel: Calibrating delay loop (skipped) preset value.. 4800.00 BogoMIPS (lpj=2400000)
Apr 13 20:08:34.008280 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 13 20:08:34.008285 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 13 20:08:34.008290 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 13 20:08:34.008296 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:08:34.008301 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 13 20:08:34.008308 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:08:34.008314 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:08:34.008319 kernel: active return thunk: srso_alias_return_thunk
Apr 13 20:08:34.008324 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 13 20:08:34.008329 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 13 20:08:34.008334 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:08:34.008339 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:08:34.008344 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:08:34.008350 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:08:34.008357 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 13 20:08:34.008362 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 13 20:08:34.008368 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 13 20:08:34.008373 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 13 20:08:34.008378 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:08:34.008383 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 13 20:08:34.008388 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 13 20:08:34.008393 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 13 20:08:34.008398 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 13 20:08:34.008406 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 13 20:08:34.008411 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:08:34.008416 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:08:34.008421 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:08:34.008426 kernel: landlock: Up and running.
Apr 13 20:08:34.008432 kernel: SELinux: Initializing.
Apr 13 20:08:34.008437 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:08:34.008442 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 13 20:08:34.008447 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 13 20:08:34.008455 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:08:34.008460 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:08:34.008465 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:08:34.008470 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 13 20:08:34.008475 kernel: ... version: 0
Apr 13 20:08:34.008480 kernel: ... bit width: 48
Apr 13 20:08:34.008485 kernel: ... generic registers: 6
Apr 13 20:08:34.008491 kernel: ... value mask: 0000ffffffffffff
Apr 13 20:08:34.008496 kernel: ... max period: 00007fffffffffff
Apr 13 20:08:34.008503 kernel: ... fixed-purpose events: 0
Apr 13 20:08:34.008508 kernel: ... event mask: 000000000000003f
Apr 13 20:08:34.008513 kernel: signal: max sigframe size: 3376
Apr 13 20:08:34.008518 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:08:34.008524 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:08:34.008529 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:08:34.008534 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:08:34.008539 kernel: .... node #0, CPUs: #1
Apr 13 20:08:34.008544 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:08:34.008552 kernel: smpboot: Max logical packages: 1
Apr 13 20:08:34.008557 kernel: smpboot: Total of 2 processors activated (9600.00 BogoMIPS)
Apr 13 20:08:34.008562 kernel: devtmpfs: initialized
Apr 13 20:08:34.008567 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:08:34.008572 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 13 20:08:34.008578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:08:34.008583 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:08:34.008588 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:08:34.008593 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:08:34.008600 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:08:34.008606 kernel: audit: type=2000 audit(1776110913.589:1): state=initialized audit_enabled=0 res=1
Apr 13 20:08:34.008611 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:08:34.008616 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:08:34.008621 kernel: cpuidle: using governor menu
Apr 13 20:08:34.008626 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:08:34.008631 kernel: dca service started, version 1.12.1
Apr 13 20:08:34.008636 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 13 20:08:34.008641 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:08:34.008649 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:08:34.008654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:08:34.008659 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:08:34.008664 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:08:34.008670 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:08:34.008675 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:08:34.008680 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:08:34.008685 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:08:34.008690 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 13 20:08:34.008697 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:08:34.008702 kernel: ACPI: Interpreter enabled
Apr 13 20:08:34.008708 kernel: ACPI: PM: (supports S0 S5)
Apr 13 20:08:34.008713 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:08:34.008718 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:08:34.008723 kernel: PCI: Using E820 reservations for host bridge windows
Apr 13 20:08:34.008728 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 13 20:08:34.008733 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:08:34.008899 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:08:34.009015 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 13 20:08:34.009138 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 13 20:08:34.009145 kernel: PCI host bridge to bus 0000:00
Apr 13 20:08:34.009245 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:08:34.009335 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:08:34.009426 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:08:34.009518 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 13 20:08:34.009606 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 13 20:08:34.009694 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 13 20:08:34.009783 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:08:34.009892 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 13 20:08:34.009997 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 13 20:08:34.010125 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 13 20:08:34.010245 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 13 20:08:34.010343 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 13 20:08:34.010439 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 13 20:08:34.010535 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 13 20:08:34.010632 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 13 20:08:34.010736 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.010837 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 13 20:08:34.010943 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.011039 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 13 20:08:34.011165 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.011262 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 13 20:08:34.011365 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.011464 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 13 20:08:34.011566 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.011662 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 13 20:08:34.011764 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.011868 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 13 20:08:34.011973 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.012098 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 13 20:08:34.012204 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.012299 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 13 20:08:34.012401 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 13 20:08:34.012496 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 13 20:08:34.012596 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 13 20:08:34.012691 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 13 20:08:34.012798 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 13 20:08:34.012894 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 13 20:08:34.012988 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 13 20:08:34.013130 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 13 20:08:34.013238 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 13 20:08:34.013344 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 20:08:34.013452 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 13 20:08:34.013554 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 13 20:08:34.013653 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 20:08:34.013748 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 20:08:34.013842 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 13 20:08:34.013936 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 13 20:08:34.014041 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 13 20:08:34.014426 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 13 20:08:34.014528 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 20:08:34.014624 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 13 20:08:34.014743 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 13 20:08:34.014850 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 13 20:08:34.014962 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 13 20:08:34.015063 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 20:08:34.015181 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 13 20:08:34.015281 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 13 20:08:34.015409 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 13 20:08:34.015512 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 13 20:08:34.015608 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 20:08:34.015702 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 13 20:08:34.015811 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 13 20:08:34.015914 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 13 20:08:34.016466 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 13 20:08:34.016585 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 20:08:34.016682 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 13 20:08:34.016778 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 13 20:08:34.018207 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 13 20:08:34.018319 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 13 20:08:34.018426 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 13 20:08:34.018523 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 20:08:34.018619 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 13 20:08:34.018744 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 13 20:08:34.018751 kernel: acpiphp: Slot [0] registered
Apr 13 20:08:34.018860 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 13 20:08:34.018961 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 13 20:08:34.019064 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 13 20:08:34.020206 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 13 20:08:34.020305 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 20:08:34.020401 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 13 20:08:34.020496 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 13 20:08:34.020503 kernel: acpiphp: Slot [0-2] registered
Apr 13 20:08:34.020599 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 20:08:34.020693 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 13 20:08:34.020787 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 13 20:08:34.020797 kernel: acpiphp: Slot [0-3] registered
Apr 13 20:08:34.020988 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 20:08:34.022141 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 13 20:08:34.022266 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 13 20:08:34.022274 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:08:34.022280 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:08:34.022285 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:08:34.022290 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:08:34.022300 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 13 20:08:34.022305 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 13 20:08:34.022311 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 13 20:08:34.022316 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 13 20:08:34.022321 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 13 20:08:34.022326 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 13 20:08:34.022332 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 13 20:08:34.022337 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 13 20:08:34.022342 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 13 20:08:34.022350 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 13 20:08:34.022355 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 13 20:08:34.022360 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 13 20:08:34.022366 kernel: iommu: Default domain type: Translated
Apr 13 20:08:34.022371 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:08:34.022376 kernel: efivars: Registered efivars operations
Apr 13 20:08:34.022381 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:08:34.022386 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:08:34.022392 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 13 20:08:34.022400 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 13 20:08:34.022405 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 13 20:08:34.022410 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 13 20:08:34.022508 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 13 20:08:34.022604 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 13 20:08:34.022698 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 13 20:08:34.022705 kernel: vgaarb: loaded
Apr 13 20:08:34.022710 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 13 20:08:34.022716 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 13 20:08:34.022723 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:08:34.022766 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:08:34.022772 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:08:34.022777 kernel: pnp: PnP ACPI init
Apr 13 20:08:34.022901 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 13 20:08:34.022911 kernel: pnp: PnP ACPI: found 5 devices
Apr 13 20:08:34.022916 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:08:34.022922 kernel: NET: Registered PF_INET protocol family
Apr 13 20:08:34.022943 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:08:34.022951 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 13 20:08:34.022956 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:08:34.022962 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 13 20:08:34.022968 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 13 20:08:34.022973 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 13 20:08:34.022979 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:08:34.022984 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 13 20:08:34.022989 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:08:34.022997 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:08:34.023167 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 13 20:08:34.023292 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 13 20:08:34.023392 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 13 20:08:34.023488 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 13 20:08:34.023585 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 13 20:08:34.023681 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 13 20:08:34.023782 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 13 20:08:34.023880 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 13 20:08:34.023979 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 13 20:08:34.026112 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 13 20:08:34.026234 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 13 20:08:34.026334 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 13 20:08:34.026430 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 13 20:08:34.026527 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 13 20:08:34.026624 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 13 20:08:34.026720 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 13 20:08:34.026818 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 13 20:08:34.026912 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 13 20:08:34.027007 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 13 20:08:34.028184 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 13 20:08:34.028289 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 13 20:08:34.028390 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 13 20:08:34.028485 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 13 20:08:34.028580 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 13 20:08:34.028674 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 13 20:08:34.028774 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 13 20:08:34.028872 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 13 20:08:34.028966 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 13 20:08:34.029061 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 13 20:08:34.030162 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 13 20:08:34.030263 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 13 20:08:34.030358 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 13 20:08:34.030452 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 13 20:08:34.030547 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 13 20:08:34.030641 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 13 20:08:34.030742 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 13 20:08:34.030843 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 13 20:08:34.030938 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 13 20:08:34.031031 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 
13 20:08:34.031933 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:08:34.032034 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:08:34.032160 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 13 20:08:34.032249 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 13 20:08:34.032338 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 13 20:08:34.032437 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 13 20:08:34.032533 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 13 20:08:34.032642 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 13 20:08:34.032754 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 13 20:08:34.032848 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 13 20:08:34.032946 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 13 20:08:34.033045 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 13 20:08:34.033168 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 13 20:08:34.033271 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 13 20:08:34.033368 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 13 20:08:34.033466 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 13 20:08:34.033558 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 13 20:08:34.033651 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 13 20:08:34.033747 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 13 20:08:34.033839 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 13 20:08:34.033931 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 13 20:08:34.034032 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 13 20:08:34.034176 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 13 20:08:34.034269 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 13 20:08:34.034276 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 13 20:08:34.034282 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:08:34.034288 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:08:34.034294 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 13 20:08:34.034302 kernel: Initialise system trusted keyrings Apr 13 20:08:34.034308 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 13 20:08:34.034314 kernel: Key type asymmetric registered Apr 13 20:08:34.034320 kernel: Asymmetric key parser 'x509' registered Apr 13 20:08:34.034325 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:08:34.034330 kernel: io scheduler mq-deadline registered Apr 13 20:08:34.034336 kernel: io scheduler kyber registered Apr 13 20:08:34.034341 kernel: io scheduler bfq registered Apr 13 20:08:34.034440 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 13 20:08:34.034536 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 13 20:08:34.034643 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 13 20:08:34.034738 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 13 20:08:34.034833 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 13 20:08:34.034928 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 13 20:08:34.035023 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 13 20:08:34.035147 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 13 20:08:34.035242 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 13 20:08:34.035336 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 13 20:08:34.035435 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 13 
20:08:34.035529 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 13 20:08:34.035623 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 13 20:08:34.035719 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 13 20:08:34.035813 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 13 20:08:34.035907 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 13 20:08:34.035914 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 13 20:08:34.036007 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 13 20:08:34.036174 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 13 20:08:34.036182 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:08:34.036188 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 13 20:08:34.036194 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:08:34.036200 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:08:34.036205 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:08:34.036211 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:08:34.036217 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:08:34.036222 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:08:34.036325 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 13 20:08:34.036417 kernel: rtc_cmos 00:03: registered as rtc0 Apr 13 20:08:34.036516 kernel: rtc_cmos 00:03: setting system clock to 2026-04-13T20:08:33 UTC (1776110913) Apr 13 20:08:34.036606 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 13 20:08:34.036613 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 13 20:08:34.036619 kernel: efifb: probing for efifb Apr 13 20:08:34.036624 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 13 20:08:34.036633 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 13 
20:08:34.036639 kernel: efifb: scrolling: redraw Apr 13 20:08:34.036644 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 13 20:08:34.036650 kernel: Console: switching to colour frame buffer device 160x50 Apr 13 20:08:34.036656 kernel: fb0: EFI VGA frame buffer device Apr 13 20:08:34.036661 kernel: pstore: Using crash dump compression: deflate Apr 13 20:08:34.036666 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 20:08:34.036672 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:08:34.036677 kernel: Segment Routing with IPv6 Apr 13 20:08:34.036683 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 20:08:34.036691 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:08:34.036697 kernel: Key type dns_resolver registered Apr 13 20:08:34.036702 kernel: IPI shorthand broadcast: enabled Apr 13 20:08:34.036708 kernel: sched_clock: Marking stable (1302015580, 224751410)->(1576848560, -50081570) Apr 13 20:08:34.036714 kernel: registered taskstats version 1 Apr 13 20:08:34.036719 kernel: Loading compiled-in X.509 certificates Apr 13 20:08:34.036725 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:08:34.036731 kernel: Key type .fscrypt registered Apr 13 20:08:34.036737 kernel: Key type fscrypt-provisioning registered Apr 13 20:08:34.036745 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 13 20:08:34.036750 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:08:34.036756 kernel: ima: No architecture policies found Apr 13 20:08:34.036761 kernel: clk: Disabling unused clocks Apr 13 20:08:34.036767 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:08:34.036772 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:08:34.036778 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:08:34.036783 kernel: Run /init as init process Apr 13 20:08:34.036792 kernel: with arguments: Apr 13 20:08:34.036797 kernel: /init Apr 13 20:08:34.036803 kernel: with environment: Apr 13 20:08:34.036808 kernel: HOME=/ Apr 13 20:08:34.036814 kernel: TERM=linux Apr 13 20:08:34.036821 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:08:34.036828 systemd[1]: Detected virtualization kvm. Apr 13 20:08:34.036835 systemd[1]: Detected architecture x86-64. Apr 13 20:08:34.036843 systemd[1]: Running in initrd. Apr 13 20:08:34.036849 systemd[1]: No hostname configured, using default hostname. Apr 13 20:08:34.036855 systemd[1]: Hostname set to . Apr 13 20:08:34.036863 systemd[1]: Initializing machine ID from VM UUID. Apr 13 20:08:34.036869 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:08:34.036874 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:08:34.036880 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:08:34.036887 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 13 20:08:34.036895 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:08:34.036901 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:08:34.036907 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:08:34.036914 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:08:34.036920 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:08:34.036925 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:08:34.036934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:08:34.036940 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:08:34.036945 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:08:34.036951 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:08:34.036957 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:08:34.036963 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:08:34.036969 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:08:34.036974 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:08:34.036980 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:08:34.036989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:08:34.036994 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:08:34.037000 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:08:34.037006 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 13 20:08:34.037012 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:08:34.037018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:08:34.037023 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:08:34.037029 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:08:34.037035 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:08:34.037044 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:08:34.037049 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:34.037123 systemd-journald[189]: Collecting audit messages is disabled. Apr 13 20:08:34.037139 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:08:34.037147 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:08:34.037153 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:08:34.037160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:08:34.037166 systemd-journald[189]: Journal started Apr 13 20:08:34.037182 systemd-journald[189]: Runtime Journal (/run/log/journal/f4506fab38a7466fa446364b3c9a1980) is 8.0M, max 76.3M, 68.3M free. Apr 13 20:08:34.021199 systemd-modules-load[190]: Inserted module 'overlay' Apr 13 20:08:34.041101 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:08:34.041457 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:34.051100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 13 20:08:34.054105 kernel: Bridge firewalling registered Apr 13 20:08:34.053504 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 13 20:08:34.056557 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:08:34.058935 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:08:34.060959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:08:34.062687 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:08:34.070267 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:08:34.073488 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:08:34.074970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:08:34.081279 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:08:34.087264 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:08:34.089023 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:08:34.090100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:08:34.092229 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:08:34.106940 dracut-cmdline[225]: dracut-dracut-053 Apr 13 20:08:34.110549 systemd-resolved[222]: Positive Trust Anchors: Apr 13 20:08:34.111139 systemd-resolved[222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:08:34.112574 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:08:34.111162 systemd-resolved[222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:08:34.114193 systemd-resolved[222]: Defaulting to hostname 'linux'. Apr 13 20:08:34.115102 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:08:34.118175 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:08:34.175122 kernel: SCSI subsystem initialized Apr 13 20:08:34.183105 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:08:34.193099 kernel: iscsi: registered transport (tcp) Apr 13 20:08:34.209360 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:08:34.209409 kernel: QLogic iSCSI HBA Driver Apr 13 20:08:34.276556 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:08:34.283205 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 13 20:08:34.307203 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:08:34.307251 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:08:34.309968 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:08:34.355115 kernel: raid6: avx512x4 gen() 44830 MB/s Apr 13 20:08:34.373123 kernel: raid6: avx512x2 gen() 46076 MB/s Apr 13 20:08:34.391124 kernel: raid6: avx512x1 gen() 43467 MB/s Apr 13 20:08:34.409131 kernel: raid6: avx2x4 gen() 45859 MB/s Apr 13 20:08:34.427129 kernel: raid6: avx2x2 gen() 48717 MB/s Apr 13 20:08:34.446259 kernel: raid6: avx2x1 gen() 39177 MB/s Apr 13 20:08:34.446323 kernel: raid6: using algorithm avx2x2 gen() 48717 MB/s Apr 13 20:08:34.466309 kernel: raid6: .... xor() 36488 MB/s, rmw enabled Apr 13 20:08:34.466368 kernel: raid6: using avx512x2 recovery algorithm Apr 13 20:08:34.503121 kernel: xor: automatically using best checksumming function avx Apr 13 20:08:34.612135 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:08:34.628429 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:08:34.634438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:08:34.644935 systemd-udevd[407]: Using default interface naming scheme 'v255'. Apr 13 20:08:34.648929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:08:34.657338 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:08:34.678716 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Apr 13 20:08:34.721707 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:08:34.729292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:08:34.808835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 13 20:08:34.814206 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:08:34.825466 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:08:34.826307 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:08:34.826963 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:08:34.827633 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:08:34.832211 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:08:34.843365 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:08:34.899095 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:08:34.903099 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:08:34.905746 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:08:34.907232 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:08:34.907826 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:08:34.908151 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:08:34.908228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:34.908555 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:34.921985 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 13 20:08:34.918318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:08:34.923146 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:08:34.923231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:34.933197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 13 20:08:34.950634 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:08:34.957148 kernel: ACPI: bus type USB registered Apr 13 20:08:34.957266 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:08:34.960322 kernel: usbcore: registered new interface driver usbfs Apr 13 20:08:34.966392 kernel: usbcore: registered new interface driver hub Apr 13 20:08:34.966413 kernel: libata version 3.00 loaded. Apr 13 20:08:34.968106 kernel: usbcore: registered new device driver usb Apr 13 20:08:34.971269 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:08:34.983301 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:08:34.989632 kernel: AES CTR mode by8 optimization enabled Apr 13 20:08:34.996292 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:08:34.996500 kernel: ahci 0000:00:1f.2: version 3.0 Apr 13 20:08:34.996648 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 13 20:08:35.000341 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 13 20:08:35.001563 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 13 20:08:35.004259 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 13 20:08:35.006107 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 13 20:08:35.012497 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 13 20:08:35.015127 kernel: scsi host1: ahci Apr 13 20:08:35.015162 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 13 20:08:35.020302 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 13 20:08:35.020462 kernel: scsi host2: ahci Apr 13 20:08:35.022001 kernel: scsi host3: ahci Apr 13 20:08:35.024368 kernel: hub 1-0:1.0: USB hub found Apr 13 20:08:35.024551 kernel: hub 1-0:1.0: 4 ports detected Apr 13 20:08:35.026621 kernel: scsi host4: 
ahci Apr 13 20:08:35.031422 kernel: scsi host5: ahci Apr 13 20:08:35.031456 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 13 20:08:35.037171 kernel: scsi host6: ahci Apr 13 20:08:35.037323 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Apr 13 20:08:35.037332 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Apr 13 20:08:35.039984 kernel: hub 2-0:1.0: USB hub found Apr 13 20:08:35.040216 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Apr 13 20:08:35.044109 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Apr 13 20:08:35.044130 kernel: hub 2-0:1.0: 4 ports detected Apr 13 20:08:35.044277 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Apr 13 20:08:35.049114 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Apr 13 20:08:35.056674 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 13 20:08:35.056860 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 13 20:08:35.063265 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 13 20:08:35.063465 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 13 20:08:35.063595 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:08:35.068337 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:08:35.068351 kernel: GPT:17805311 != 160006143 Apr 13 20:08:35.071041 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:08:35.071052 kernel: GPT:17805311 != 160006143 Apr 13 20:08:35.073796 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 13 20:08:35.073807 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:35.077142 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 13 20:08:35.272201 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 13 20:08:35.364142 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:35.364253 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 13 20:08:35.378368 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:35.378435 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:35.378465 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 13 20:08:35.385761 kernel: ata1.00: applying bridge limits Apr 13 20:08:35.388212 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:35.394104 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 13 20:08:35.394136 kernel: ata1.00: configured for UDMA/100 Apr 13 20:08:35.405174 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 13 20:08:35.436104 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 13 20:08:35.444097 kernel: usbcore: registered new interface driver usbhid Apr 13 20:08:35.444156 kernel: usbhid: USB HID core driver Apr 13 20:08:35.450483 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 13 20:08:35.463446 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 13 20:08:35.463650 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 13 20:08:35.463788 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 13 20:08:35.467091 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (479) Apr 13 20:08:35.472167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Apr 13 20:08:35.478963 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (480) Apr 13 20:08:35.478986 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 13 20:08:35.485395 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 13 20:08:35.489467 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 13 20:08:35.490404 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 13 20:08:35.494274 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 13 20:08:35.499435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:08:35.505882 disk-uuid[580]: Primary Header is updated. Apr 13 20:08:35.505882 disk-uuid[580]: Secondary Entries is updated. Apr 13 20:08:35.505882 disk-uuid[580]: Secondary Header is updated. Apr 13 20:08:35.512107 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:35.519102 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:36.533264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:08:36.536047 disk-uuid[581]: The operation has completed successfully. Apr 13 20:08:36.616353 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:08:36.616472 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:08:36.622243 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:08:36.624875 sh[602]: Success Apr 13 20:08:36.637852 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 13 20:08:36.672327 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:08:36.680173 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:08:36.681049 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:08:36.694898 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:08:36.694930 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:08:36.694939 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:08:36.699453 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:08:36.699472 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:08:36.708097 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:08:36.709933 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:08:36.710962 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:08:36.716182 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:08:36.718206 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:08:36.733447 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:08:36.733477 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:08:36.733487 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:08:36.741423 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:08:36.741447 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:08:36.749361 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:08:36.752149 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:08:36.757599 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:08:36.762224 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 20:08:36.811472 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:08:36.820219 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:08:36.827855 ignition[715]: Ignition 2.19.0
Apr 13 20:08:36.828414 ignition[715]: Stage: fetch-offline
Apr 13 20:08:36.828801 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:08:36.828812 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:08:36.830497 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:08:36.829187 ignition[715]: parsed url from cmdline: ""
Apr 13 20:08:36.829197 ignition[715]: no config URL provided
Apr 13 20:08:36.829202 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:08:36.829210 ignition[715]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:08:36.829215 ignition[715]: failed to fetch config: resource requires networking
Apr 13 20:08:36.829465 ignition[715]: Ignition finished successfully
Apr 13 20:08:36.839193 systemd-networkd[787]: lo: Link UP
Apr 13 20:08:36.839201 systemd-networkd[787]: lo: Gained carrier
Apr 13 20:08:36.841549 systemd-networkd[787]: Enumeration completed
Apr 13 20:08:36.841617 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:08:36.842601 systemd[1]: Reached target network.target - Network.
Apr 13 20:08:36.842645 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:36.842648 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:08:36.844762 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:36.844771 systemd-networkd[787]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:08:36.846254 systemd-networkd[787]: eth0: Link UP
Apr 13 20:08:36.846260 systemd-networkd[787]: eth0: Gained carrier
Apr 13 20:08:36.846267 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:36.850206 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:08:36.850753 systemd-networkd[787]: eth1: Link UP
Apr 13 20:08:36.850757 systemd-networkd[787]: eth1: Gained carrier
Apr 13 20:08:36.850764 systemd-networkd[787]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:36.860367 ignition[791]: Ignition 2.19.0
Apr 13 20:08:36.860379 ignition[791]: Stage: fetch
Apr 13 20:08:36.860534 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:08:36.860543 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:08:36.860609 ignition[791]: parsed url from cmdline: ""
Apr 13 20:08:36.860613 ignition[791]: no config URL provided
Apr 13 20:08:36.860617 ignition[791]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:08:36.860624 ignition[791]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:08:36.860637 ignition[791]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 13 20:08:36.860752 ignition[791]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 13 20:08:36.879125 systemd-networkd[787]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 20:08:36.906115 systemd-networkd[787]: eth0: DHCPv4 address 62.238.1.80/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 20:08:37.061237 ignition[791]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 13 20:08:37.066996 ignition[791]: GET result: OK
Apr 13 20:08:37.067051 ignition[791]: parsing config with SHA512: d4e7260c59940982ba0b668e828448038cc89fbf52351442a4c613074f17ee763314570b1f5c1427dde4e556bb4d5645890e295c99914e66368b1d09f9f9ac6b
Apr 13 20:08:37.069339 unknown[791]: fetched base config from "system"
Apr 13 20:08:37.069781 unknown[791]: fetched base config from "system"
Apr 13 20:08:37.069979 ignition[791]: fetch: fetch complete
Apr 13 20:08:37.069788 unknown[791]: fetched user config from "hetzner"
Apr 13 20:08:37.069984 ignition[791]: fetch: fetch passed
Apr 13 20:08:37.070022 ignition[791]: Ignition finished successfully
Apr 13 20:08:37.073847 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:08:37.080305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:08:37.108206 ignition[798]: Ignition 2.19.0
Apr 13 20:08:37.109132 ignition[798]: Stage: kargs
Apr 13 20:08:37.109284 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:08:37.109295 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:08:37.109810 ignition[798]: kargs: kargs passed
Apr 13 20:08:37.109850 ignition[798]: Ignition finished successfully
Apr 13 20:08:37.111486 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:08:37.119268 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:08:37.131054 ignition[806]: Ignition 2.19.0
Apr 13 20:08:37.131063 ignition[806]: Stage: disks
Apr 13 20:08:37.131421 ignition[806]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:08:37.131440 ignition[806]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:08:37.132690 ignition[806]: disks: disks passed
Apr 13 20:08:37.132753 ignition[806]: Ignition finished successfully
Apr 13 20:08:37.134468 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:08:37.135003 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:08:37.135586 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:08:37.136279 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:08:37.136932 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:08:37.137642 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:08:37.142223 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:08:37.157892 systemd-fsck[814]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 20:08:37.161665 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:08:37.168112 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:08:37.238133 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:08:37.239337 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:08:37.241039 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:08:37.247193 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:08:37.250207 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:08:37.252248 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 13 20:08:37.253316 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:08:37.254187 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:08:37.257837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:08:37.264119 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (822)
Apr 13 20:08:37.265042 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:08:37.274363 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:08:37.274431 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:08:37.274441 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:08:37.282006 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:08:37.282060 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:08:37.287382 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:08:37.316416 coreos-metadata[824]: Apr 13 20:08:37.316 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 13 20:08:37.317579 coreos-metadata[824]: Apr 13 20:08:37.317 INFO Fetch successful
Apr 13 20:08:37.317579 coreos-metadata[824]: Apr 13 20:08:37.317 INFO wrote hostname ci-4081-3-7-1-0f1354cb62 to /sysroot/etc/hostname
Apr 13 20:08:37.320235 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 20:08:37.328499 initrd-setup-root[850]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:08:37.334293 initrd-setup-root[857]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:08:37.339056 initrd-setup-root[864]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:08:37.343265 initrd-setup-root[871]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:08:37.430587 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:08:37.436155 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:08:37.437570 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:08:37.449107 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:08:37.467699 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:08:37.468930 ignition[943]: INFO : Ignition 2.19.0
Apr 13 20:08:37.468930 ignition[943]: INFO : Stage: mount
Apr 13 20:08:37.468930 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:08:37.468930 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:08:37.470549 ignition[943]: INFO : mount: mount passed
Apr 13 20:08:37.470549 ignition[943]: INFO : Ignition finished successfully
Apr 13 20:08:37.471186 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:08:37.478250 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:08:37.691947 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:08:37.698356 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:08:37.738158 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (954)
Apr 13 20:08:37.748433 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:08:37.748516 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:08:37.754529 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:08:37.770770 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:08:37.770854 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:08:37.775549 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:08:37.813174 ignition[971]: INFO : Ignition 2.19.0
Apr 13 20:08:37.814498 ignition[971]: INFO : Stage: files
Apr 13 20:08:37.815161 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:08:37.815161 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:08:37.816362 ignition[971]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:08:37.817676 ignition[971]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:08:37.817676 ignition[971]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:08:37.822587 ignition[971]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:08:37.823797 ignition[971]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:08:37.823797 ignition[971]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:08:37.823703 unknown[971]: wrote ssh authorized keys file for user: core
Apr 13 20:08:37.827367 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:08:37.829135 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:08:38.083158 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:08:38.396316 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:08:38.396316 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:08:38.400234 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 13 20:08:38.576805 systemd-networkd[787]: eth1: Gained IPv6LL
Apr 13 20:08:38.768606 systemd-networkd[787]: eth0: Gained IPv6LL
Apr 13 20:08:38.955788 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:08:41.596987 ignition[971]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:08:41.596987 ignition[971]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:08:41.600316 ignition[971]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:08:41.613769 ignition[971]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:08:41.613769 ignition[971]: INFO : files: files passed
Apr 13 20:08:41.613769 ignition[971]: INFO : Ignition finished successfully
Apr 13 20:08:41.603734 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:08:41.614520 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:08:41.622550 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:08:41.626266 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:08:41.626433 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:08:41.666194 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:08:41.666194 initrd-setup-root-after-ignition[1000]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:08:41.670574 initrd-setup-root-after-ignition[1004]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:08:41.674245 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:08:41.676138 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:08:41.683348 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:08:41.732131 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:08:41.732345 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:08:41.735142 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:08:41.737387 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:08:41.738511 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:08:41.744339 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:08:41.767254 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:08:41.772194 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:08:41.790666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:08:41.791203 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:08:41.792358 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:08:41.793485 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:08:41.793578 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:08:41.795186 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:08:41.796286 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:08:41.797289 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:08:41.798331 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:08:41.799365 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:08:41.800390 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:08:41.801413 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:08:41.802444 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:08:41.803481 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:08:41.804486 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:08:41.805499 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:08:41.805574 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:08:41.807215 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:08:41.808289 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:08:41.809373 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:08:41.809454 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:08:41.810374 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:08:41.810446 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:08:41.812299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:08:41.812398 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:08:41.814860 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:08:41.814951 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:08:41.816239 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 13 20:08:41.816313 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 13 20:08:41.826506 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:08:41.827754 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:08:41.830115 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:08:41.830238 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:08:41.832672 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:08:41.832771 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:08:41.841644 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:08:41.841734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:08:41.844434 ignition[1024]: INFO : Ignition 2.19.0
Apr 13 20:08:41.844434 ignition[1024]: INFO : Stage: umount
Apr 13 20:08:41.844434 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:08:41.844434 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 13 20:08:41.845899 ignition[1024]: INFO : umount: umount passed
Apr 13 20:08:41.845899 ignition[1024]: INFO : Ignition finished successfully
Apr 13 20:08:41.847336 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:08:41.847429 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:08:41.848929 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:08:41.848994 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:08:41.849666 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:08:41.849704 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:08:41.850316 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:08:41.850352 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:08:41.852188 systemd[1]: Stopped target network.target - Network.
Apr 13 20:08:41.852769 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:08:41.852811 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:08:41.853499 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:08:41.857131 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:08:41.861115 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:08:41.861471 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:08:41.862095 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:08:41.862733 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:08:41.862771 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:08:41.863371 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:08:41.863404 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:08:41.863960 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:08:41.864001 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:08:41.864614 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:08:41.864652 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:08:41.865362 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:08:41.866113 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:08:41.867568 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:08:41.868019 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:08:41.868142 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:08:41.868855 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:08:41.868923 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:08:41.869109 systemd-networkd[787]: eth1: DHCPv6 lease lost
Apr 13 20:08:41.871832 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:08:41.871932 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:08:41.873121 systemd-networkd[787]: eth0: DHCPv6 lease lost
Apr 13 20:08:41.873339 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:08:41.873410 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:08:41.874906 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:08:41.875027 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:08:41.876293 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:08:41.876348 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:08:41.880151 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:08:41.880470 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:08:41.880511 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:08:41.880921 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:08:41.880957 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:08:41.881547 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:08:41.881583 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:08:41.883222 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:08:41.890012 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:08:41.890126 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:08:41.891526 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:08:41.891667 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:08:41.892699 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:08:41.892760 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:08:41.893488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:08:41.893521 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:08:41.894116 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:08:41.894155 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:08:41.895204 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:08:41.895242 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:08:41.896160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:08:41.896196 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:08:41.910389 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:08:41.911234 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:08:41.911634 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:08:41.912000 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:08:41.912035 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:08:41.912438 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:08:41.912474 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:08:41.912843 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:08:41.912878 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:08:41.916271 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:08:41.916360 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:08:41.917087 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:08:41.926239 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:08:41.931950 systemd[1]: Switching root.
Apr 13 20:08:41.970585 systemd-journald[189]: Journal stopped
Apr 13 20:08:42.983184 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:08:42.983241 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:08:42.983252 kernel: SELinux: policy capability open_perms=1
Apr 13 20:08:42.983260 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:08:42.983274 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:08:42.983282 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:08:42.983291 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:08:42.983301 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:08:42.983313 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:08:42.983324 kernel: audit: type=1403 audit(1776110922.119:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:08:42.983338 systemd[1]: Successfully loaded SELinux policy in 68.674ms.
Apr 13 20:08:42.983364 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.066ms.
Apr 13 20:08:42.983377 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:08:42.983386 systemd[1]: Detected virtualization kvm.
Apr 13 20:08:42.983397 systemd[1]: Detected architecture x86-64.
Apr 13 20:08:42.983406 systemd[1]: Detected first boot.
Apr 13 20:08:42.983415 systemd[1]: Hostname set to .
Apr 13 20:08:42.983424 systemd[1]: Initializing machine ID from VM UUID.
Apr 13 20:08:42.983433 zram_generator::config[1067]: No configuration found.
Apr 13 20:08:42.983447 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:08:42.983456 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 20:08:42.983464 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 20:08:42.983476 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:08:42.983485 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:08:42.983494 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:08:42.983503 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:08:42.983512 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:08:42.983521 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:08:42.983530 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:08:42.983539 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:08:42.983550 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:08:42.983559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:08:42.983569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:08:42.983578 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:08:42.983591 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:08:42.983600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:08:42.983609 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:08:42.983618 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:08:42.983627 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:08:42.983636 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 20:08:42.983647 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 20:08:42.983656 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:08:42.983665 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:08:42.983674 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:08:42.983683 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:08:42.983692 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:08:42.983703 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:08:42.983712 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:08:42.983722 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:08:42.983731 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:08:42.983739 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:08:42.983749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:08:42.983757 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:08:42.983766 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:08:42.983775 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:08:42.983786 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:08:42.983795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:42.983804 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:08:42.983813 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:08:42.983823 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:08:42.983832 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:08:42.983841 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:08:42.983850 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:08:42.983861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:08:42.983870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:08:42.983879 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:08:42.983887 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:08:42.983896 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:08:42.983905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:08:42.983914 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:08:42.983923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:08:42.983932 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:08:42.983953 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 20:08:42.983962 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 20:08:42.983973 kernel: fuse: init (API version 7.39)
Apr 13 20:08:42.983982 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 20:08:42.983991 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 20:08:42.983999 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:08:42.984008 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:08:42.984017 kernel: loop: module loaded
Apr 13 20:08:42.984028 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:08:42.984037 kernel: ACPI: bus type drm_connector registered
Apr 13 20:08:42.984046 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:08:42.984082 systemd-journald[1157]: Collecting audit messages is disabled.
Apr 13 20:08:42.984105 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:08:42.984114 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:08:42.984123 systemd[1]: Stopped verity-setup.service.
Apr 13 20:08:42.984135 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:42.984144 systemd-journald[1157]: Journal started
Apr 13 20:08:42.984160 systemd-journald[1157]: Runtime Journal (/run/log/journal/f4506fab38a7466fa446364b3c9a1980) is 8.0M, max 76.3M, 68.3M free.
Apr 13 20:08:42.651921 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:08:42.668550 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 20:08:42.669888 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:08:42.986166 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:08:42.990653 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:08:42.991188 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:08:42.991688 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:08:42.992154 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:08:42.992587 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:08:42.993010 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:08:42.993601 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:08:42.994207 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:08:42.994802 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:08:42.994926 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:08:42.995550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:08:42.995667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:08:42.996332 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:08:42.996450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:08:42.997327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:08:42.997455 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:08:42.998048 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:08:42.998284 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:08:42.998879 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:08:42.999001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:08:42.999591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:08:43.000153 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:08:43.000688 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:08:43.010842 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:08:43.017166 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:08:43.021117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:08:43.021483 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:08:43.021504 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:08:43.022910 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:08:43.029932 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:08:43.032196 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:08:43.033233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:08:43.037064 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:08:43.040168 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:08:43.040529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:08:43.041311 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:08:43.042147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:08:43.047661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:08:43.050446 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:08:43.054190 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:08:43.055851 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:08:43.062186 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:08:43.062751 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:08:43.063346 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:08:43.067700 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:08:43.074252 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:08:43.097185 systemd-journald[1157]: Time spent on flushing to /var/log/journal/f4506fab38a7466fa446364b3c9a1980 is 52.122ms for 1183 entries.
Apr 13 20:08:43.097185 systemd-journald[1157]: System Journal (/var/log/journal/f4506fab38a7466fa446364b3c9a1980) is 8.0M, max 584.8M, 576.8M free.
Apr 13 20:08:43.172571 systemd-journald[1157]: Received client request to flush runtime journal.
Apr 13 20:08:43.173136 kernel: loop0: detected capacity change from 0 to 8
Apr 13 20:08:43.173171 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:08:43.173183 kernel: loop1: detected capacity change from 0 to 142488
Apr 13 20:08:43.109010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:08:43.114534 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:08:43.121502 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:08:43.170644 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 13 20:08:43.170656 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Apr 13 20:08:43.172849 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:08:43.182728 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:08:43.184570 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:08:43.185498 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:08:43.195222 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:08:43.208927 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 13 20:08:43.220181 kernel: loop2: detected capacity change from 0 to 140768
Apr 13 20:08:43.234266 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:08:43.246121 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:08:43.267901 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Apr 13 20:08:43.268162 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Apr 13 20:08:43.273089 kernel: loop3: detected capacity change from 0 to 217752
Apr 13 20:08:43.276295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:08:43.317102 kernel: loop4: detected capacity change from 0 to 8
Apr 13 20:08:43.324115 kernel: loop5: detected capacity change from 0 to 142488
Apr 13 20:08:43.341106 kernel: loop6: detected capacity change from 0 to 140768
Apr 13 20:08:43.363105 kernel: loop7: detected capacity change from 0 to 217752
Apr 13 20:08:43.386415 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 13 20:08:43.387008 (sd-merge)[1215]: Merged extensions into '/usr'.
Apr 13 20:08:43.395325 systemd[1]: Reloading requested from client PID 1187 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:08:43.395494 systemd[1]: Reloading...
Apr 13 20:08:43.470115 zram_generator::config[1244]: No configuration found.
Apr 13 20:08:43.556862 ldconfig[1182]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:08:43.571814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:08:43.608883 systemd[1]: Reloading finished in 212 ms.
Apr 13 20:08:43.641272 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:08:43.642049 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:08:43.644254 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:08:43.652387 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:08:43.657349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:08:43.664208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:08:43.669813 systemd[1]: Reloading requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:08:43.669834 systemd[1]: Reloading...
Apr 13 20:08:43.691766 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:08:43.693616 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:08:43.694481 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:08:43.694734 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 13 20:08:43.694796 systemd-udevd[1287]: Using default interface naming scheme 'v255'.
Apr 13 20:08:43.694884 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Apr 13 20:08:43.698838 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:08:43.698914 systemd-tmpfiles[1286]: Skipping /boot
Apr 13 20:08:43.717797 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:08:43.720242 systemd-tmpfiles[1286]: Skipping /boot
Apr 13 20:08:43.740093 zram_generator::config[1313]: No configuration found.
Apr 13 20:08:43.858273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:08:43.916013 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:08:43.916718 systemd[1]: Reloading finished in 246 ms.
Apr 13 20:08:43.921093 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1319)
Apr 13 20:08:43.928023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 13 20:08:43.932610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:08:43.933265 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:08:43.957228 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:08:43.962219 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:08:43.972191 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:08:43.975764 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:08:43.979173 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:08:43.988095 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:08:43.988193 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:08:43.989356 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:08:44.005996 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:44.006522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:08:44.013242 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:08:44.015306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:08:44.023244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:08:44.023650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:08:44.030241 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:08:44.030566 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:44.031773 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:08:44.034023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:08:44.036590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:08:44.040806 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 13 20:08:44.049610 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:44.049798 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:08:44.057262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:08:44.057691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:08:44.060918 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:08:44.062123 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:44.062819 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:08:44.063653 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:08:44.063780 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:08:44.065863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:08:44.066124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:08:44.069568 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:08:44.074672 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:44.074840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:08:44.078182 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:08:44.080700 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:08:44.083348 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:08:44.084063 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:08:44.084178 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:08:44.089112 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:08:44.092044 augenrules[1429]: No rules
Apr 13 20:08:44.092492 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:08:44.103969 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 13 20:08:44.109773 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:08:44.123936 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 13 20:08:44.122463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:08:44.122613 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:08:44.124352 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:08:44.128323 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 13 20:08:44.128535 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 13 20:08:44.130799 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 13 20:08:44.130959 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 13 20:08:44.132636 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:08:44.140797 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:08:44.154475 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:08:44.154627 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:08:44.155448 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:08:44.156378 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:08:44.157088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:08:44.163779 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:08:44.163922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:08:44.164515 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:08:44.169828 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 13 20:08:44.169858 kernel: Console: switching to colour dummy device 80x25
Apr 13 20:08:44.172893 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 13 20:08:44.173089 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 13 20:08:44.173107 kernel: [drm] features: -context_init
Apr 13 20:08:44.176304 kernel: [drm] number of scanouts: 1
Apr 13 20:08:44.176337 kernel: [drm] number of cap sets: 0
Apr 13 20:08:44.177991 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 13 20:08:44.183276 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 13 20:08:44.183312 kernel: Console: switching to colour frame buffer device 160x50
Apr 13 20:08:44.190091 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 13 20:08:44.203774 kernel: EDAC MC: Ver: 3.0.0
Apr 13 20:08:44.206007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 13 20:08:44.216288 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:08:44.230514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:08:44.251342 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:08:44.251507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:08:44.256310 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:08:44.270182 systemd-networkd[1398]: lo: Link UP
Apr 13 20:08:44.270336 systemd-networkd[1398]: lo: Gained carrier
Apr 13 20:08:44.276136 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:08:44.280318 systemd-networkd[1398]: Enumeration completed
Apr 13 20:08:44.282169 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:08:44.286213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:08:44.286476 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:44.286480 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:08:44.288857 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:44.288901 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:08:44.289582 systemd-networkd[1398]: eth0: Link UP
Apr 13 20:08:44.289628 systemd-networkd[1398]: eth0: Gained carrier
Apr 13 20:08:44.289674 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:44.294219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:08:44.294436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:08:44.296917 systemd-networkd[1398]: eth1: Link UP
Apr 13 20:08:44.296925 systemd-networkd[1398]: eth1: Gained carrier
Apr 13 20:08:44.296937 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:08:44.299482 systemd-resolved[1399]: Positive Trust Anchors:
Apr 13 20:08:44.299496 systemd-resolved[1399]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:08:44.299518 systemd-resolved[1399]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:08:44.303022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:08:44.304053 systemd-resolved[1399]: Using system hostname 'ci-4081-3-7-1-0f1354cb62'.
Apr 13 20:08:44.305653 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:08:44.305847 systemd[1]: Reached target network.target - Network.
Apr 13 20:08:44.305906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:08:44.309172 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 13 20:08:44.310343 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:08:44.332118 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 13 20:08:44.333084 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection.
Apr 13 20:08:44.359256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:08:44.362131 systemd-networkd[1398]: eth0: DHCPv4 address 62.238.1.80/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 13 20:08:44.363271 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection.
Apr 13 20:08:44.404767 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:08:44.413318 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:08:44.428727 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:08:44.464609 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:08:44.467357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:08:44.469165 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:08:44.469580 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:08:44.472013 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:08:44.473681 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:08:44.476414 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:08:44.478036 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:08:44.479488 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:08:44.479594 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:08:44.481251 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:08:44.485265 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:08:44.488223 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 20:08:44.495338 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 20:08:44.499202 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 20:08:44.503925 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 20:08:44.507505 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:08:44.511091 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:08:44.510860 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:08:44.511590 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 13 20:08:44.511651 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 20:08:44.517192 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 20:08:44.522658 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 20:08:44.532298 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 20:08:44.535553 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 20:08:44.546280 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 20:08:44.546866 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 20:08:44.548571 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 20:08:44.554127 jq[1483]: false Apr 13 20:08:44.562227 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 20:08:44.567502 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Apr 13 20:08:44.571210 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 13 20:08:44.575777 dbus-daemon[1482]: [system] SELinux support is enabled Apr 13 20:08:44.581221 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 20:08:44.586310 coreos-metadata[1481]: Apr 13 20:08:44.586 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 13 20:08:44.590216 coreos-metadata[1481]: Apr 13 20:08:44.587 INFO Fetch successful Apr 13 20:08:44.590216 coreos-metadata[1481]: Apr 13 20:08:44.587 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 13 20:08:44.590216 coreos-metadata[1481]: Apr 13 20:08:44.587 INFO Fetch successful Apr 13 20:08:44.590235 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 20:08:44.594913 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 13 20:08:44.595484 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 20:08:44.603293 systemd[1]: Starting update-engine.service - Update Engine... 
Apr 13 20:08:44.611157 extend-filesystems[1485]: Found loop4 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found loop5 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found loop6 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found loop7 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda1 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda2 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda3 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found usr Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda4 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda6 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda7 Apr 13 20:08:44.611157 extend-filesystems[1485]: Found sda9 Apr 13 20:08:44.611157 extend-filesystems[1485]: Checking size of /dev/sda9 Apr 13 20:08:44.677462 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Apr 13 20:08:44.611202 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 20:08:44.677562 extend-filesystems[1485]: Resized partition /dev/sda9 Apr 13 20:08:44.615369 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 20:08:44.677906 extend-filesystems[1512]: resize2fs 1.47.1 (20-May-2024) Apr 13 20:08:44.678353 update_engine[1497]: I20260413 20:08:44.655365 1497 main.cc:92] Flatcar Update Engine starting Apr 13 20:08:44.678353 update_engine[1497]: I20260413 20:08:44.663370 1497 update_check_scheduler.cc:74] Next update check in 6m3s Apr 13 20:08:44.626452 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 20:08:44.646270 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 20:08:44.646442 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 13 20:08:44.678842 jq[1501]: true Apr 13 20:08:44.646741 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 20:08:44.646917 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 20:08:44.670516 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 20:08:44.670690 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 13 20:08:44.699938 systemd[1]: Started update-engine.service - Update Engine. Apr 13 20:08:44.702956 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 20:08:44.703390 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 20:08:44.706300 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 20:08:44.706321 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 13 20:08:44.720223 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 20:08:44.725555 jq[1516]: true Apr 13 20:08:44.736225 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1338) Apr 13 20:08:44.737537 (ntainerd)[1517]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 20:08:44.750507 systemd-logind[1494]: New seat seat0. 
Apr 13 20:08:44.752388 tar[1514]: linux-amd64/LICENSE Apr 13 20:08:44.753026 tar[1514]: linux-amd64/helm Apr 13 20:08:44.761541 systemd-logind[1494]: Watching system buttons on /dev/input/event2 (Power Button) Apr 13 20:08:44.761564 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 13 20:08:44.763287 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 20:08:44.773940 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 20:08:44.778919 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 20:08:44.884762 locksmithd[1529]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 20:08:44.930795 bash[1552]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:08:44.931579 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 20:08:44.946751 systemd[1]: Starting sshkeys.service... Apr 13 20:08:44.963886 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 20:08:44.972491 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 20:08:44.982573 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 20:08:44.994591 containerd[1517]: time="2026-04-13T20:08:44.994532409Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 20:08:45.025601 containerd[1517]: time="2026-04-13T20:08:45.024989249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030428289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030449199Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030460969Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030589549Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030599239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030643539Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030651669Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030792939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030803319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030812639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031503 containerd[1517]: time="2026-04-13T20:08:45.030819479Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 20:08:45.031661 containerd[1517]: time="2026-04-13T20:08:45.030893139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:08:45.033089 containerd[1517]: time="2026-04-13T20:08:45.032567949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:08:45.033089 containerd[1517]: time="2026-04-13T20:08:45.032667799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:08:45.033089 containerd[1517]: time="2026-04-13T20:08:45.032678389Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 20:08:45.033089 containerd[1517]: time="2026-04-13T20:08:45.032748589Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 13 20:08:45.033089 containerd[1517]: time="2026-04-13T20:08:45.032783189Z" level=info msg="metadata content store policy set" policy=shared Apr 13 20:08:45.041473 coreos-metadata[1566]: Apr 13 20:08:45.041 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 13 20:08:45.043929 coreos-metadata[1566]: Apr 13 20:08:45.043 INFO Fetch successful Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055369369Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055405559Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055416859Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055429049Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055439379Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055547589Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055689439Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055767149Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055777219Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055789339Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055799179Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055808349Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055818169Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057648 containerd[1517]: time="2026-04-13T20:08:45.055827219Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055836909Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055846409Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055854889Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055862769Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055876369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055885489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055894869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055904789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055917509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055926259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055935959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055947249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055956269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.057841 containerd[1517]: time="2026-04-13T20:08:45.055966269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.055974729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.055982749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.055990459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056000639Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056018559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056027019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056034449Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056082759Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056094969Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056110599Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056118889Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056125299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056133819Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:08:45.058032 containerd[1517]: time="2026-04-13T20:08:45.056146629Z" level=info msg="NRI interface is disabled by configuration." Apr 13 20:08:45.058385 containerd[1517]: time="2026-04-13T20:08:45.056153899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 20:08:45.058408 containerd[1517]: time="2026-04-13T20:08:45.056331149Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:08:45.058408 containerd[1517]: time="2026-04-13T20:08:45.056367819Z" level=info msg="Connect containerd service" Apr 13 20:08:45.058408 containerd[1517]: time="2026-04-13T20:08:45.056395959Z" level=info msg="using legacy CRI server" Apr 13 20:08:45.058408 containerd[1517]: time="2026-04-13T20:08:45.056400829Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:08:45.058408 containerd[1517]: time="2026-04-13T20:08:45.056455479Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:08:45.058408 containerd[1517]: time="2026-04-13T20:08:45.056850719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 13 20:08:45.059498 unknown[1566]: wrote ssh authorized keys file for user: core Apr 13 20:08:45.060605 containerd[1517]: time="2026-04-13T20:08:45.060582929Z" level=info msg="Start subscribing containerd event" Apr 13 20:08:45.060893 containerd[1517]: time="2026-04-13T20:08:45.060882219Z" level=info msg="Start recovering state" Apr 13 20:08:45.060966 containerd[1517]: time="2026-04-13T20:08:45.060957729Z" level=info msg="Start event monitor" Apr 13 20:08:45.061028 containerd[1517]: time="2026-04-13T20:08:45.061019979Z" level=info msg="Start snapshots syncer" Apr 13 20:08:45.094475 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Apr 13 20:08:45.062342 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 20:08:45.094777 containerd[1517]: time="2026-04-13T20:08:45.061205159Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:08:45.094777 containerd[1517]: time="2026-04-13T20:08:45.061212349Z" level=info msg="Start streaming server" Apr 13 20:08:45.094777 containerd[1517]: time="2026-04-13T20:08:45.061549889Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:08:45.094777 containerd[1517]: time="2026-04-13T20:08:45.061598259Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:08:45.094777 containerd[1517]: time="2026-04-13T20:08:45.061719929Z" level=info msg="containerd successfully booted in 0.068360s" Apr 13 20:08:45.097302 extend-filesystems[1512]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 13 20:08:45.097302 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 13 20:08:45.097302 extend-filesystems[1512]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. 
Apr 13 20:08:45.105670 extend-filesystems[1485]: Resized filesystem in /dev/sda9 Apr 13 20:08:45.105670 extend-filesystems[1485]: Found sr0 Apr 13 20:08:45.097622 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 20:08:45.097823 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 20:08:45.112088 update-ssh-keys[1571]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:08:45.112805 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 20:08:45.115987 systemd[1]: Finished sshkeys.service. Apr 13 20:08:45.335585 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 20:08:45.354517 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 20:08:45.367149 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 20:08:45.369922 systemd[1]: Started sshd@0-62.238.1.80:22-20.229.252.112:58188.service - OpenSSH per-connection server daemon (20.229.252.112:58188). Apr 13 20:08:45.383350 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 20:08:45.384027 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 20:08:45.395460 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 20:08:45.402992 tar[1514]: linux-amd64/README.md Apr 13 20:08:45.409644 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 20:08:45.414469 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 20:08:45.417322 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 20:08:45.419232 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 20:08:45.421317 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 13 20:08:45.590496 sshd[1586]: Accepted publickey for core from 20.229.252.112 port 58188 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:08:45.594531 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:08:45.611687 systemd-logind[1494]: New session 1 of user core. Apr 13 20:08:45.614020 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 20:08:45.623904 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 20:08:45.649642 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 20:08:45.659550 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 20:08:45.674625 (systemd)[1601]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 20:08:45.791507 systemd[1601]: Queued start job for default target default.target. Apr 13 20:08:45.801100 systemd[1601]: Created slice app.slice - User Application Slice. Apr 13 20:08:45.801134 systemd[1601]: Reached target paths.target - Paths. Apr 13 20:08:45.801145 systemd[1601]: Reached target timers.target - Timers. Apr 13 20:08:45.802521 systemd[1601]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 20:08:45.837929 systemd[1601]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 20:08:45.838322 systemd[1601]: Reached target sockets.target - Sockets. Apr 13 20:08:45.838351 systemd[1601]: Reached target basic.target - Basic System. Apr 13 20:08:45.838414 systemd[1601]: Reached target default.target - Main User Target. Apr 13 20:08:45.838471 systemd[1601]: Startup finished in 150ms. Apr 13 20:08:45.838833 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 20:08:45.861359 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 13 20:08:46.053510 systemd[1]: Started sshd@1-62.238.1.80:22-20.229.252.112:58200.service - OpenSSH per-connection server daemon (20.229.252.112:58200). Apr 13 20:08:46.064666 systemd-networkd[1398]: eth0: Gained IPv6LL Apr 13 20:08:46.066203 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Apr 13 20:08:46.072877 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 20:08:46.077404 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 20:08:46.086379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:08:46.095207 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 13 20:08:46.133927 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 20:08:46.270150 sshd[1612]: Accepted publickey for core from 20.229.252.112 port 58200 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:08:46.272323 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:08:46.280692 systemd-logind[1494]: New session 2 of user core. Apr 13 20:08:46.286266 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 20:08:46.320462 systemd-networkd[1398]: eth1: Gained IPv6LL Apr 13 20:08:46.321402 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Apr 13 20:08:46.444446 sshd[1612]: pam_unix(sshd:session): session closed for user core Apr 13 20:08:46.450385 systemd[1]: sshd@1-62.238.1.80:22-20.229.252.112:58200.service: Deactivated successfully. Apr 13 20:08:46.453303 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 20:08:46.454475 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Apr 13 20:08:46.456253 systemd-logind[1494]: Removed session 2. 
Apr 13 20:08:46.489327 systemd[1]: Started sshd@2-62.238.1.80:22-20.229.252.112:58208.service - OpenSSH per-connection server daemon (20.229.252.112:58208). Apr 13 20:08:46.696221 sshd[1630]: Accepted publickey for core from 20.229.252.112 port 58208 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:08:46.696391 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:08:46.702137 systemd-logind[1494]: New session 3 of user core. Apr 13 20:08:46.705369 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 20:08:46.860275 sshd[1630]: pam_unix(sshd:session): session closed for user core Apr 13 20:08:46.866252 systemd[1]: sshd@2-62.238.1.80:22-20.229.252.112:58208.service: Deactivated successfully. Apr 13 20:08:46.869230 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 20:08:46.870422 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Apr 13 20:08:46.871904 systemd-logind[1494]: Removed session 3. Apr 13 20:08:47.000760 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:08:47.003568 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 20:08:47.007968 systemd[1]: Startup finished in 1.465s (kernel) + 8.345s (initrd) + 4.956s (userspace) = 14.766s. 
Apr 13 20:08:47.015986 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:08:47.431699 kubelet[1641]: E0413 20:08:47.431549 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:08:47.435919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:08:47.436364 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:08:56.911859 systemd[1]: Started sshd@3-62.238.1.80:22-20.229.252.112:37044.service - OpenSSH per-connection server daemon (20.229.252.112:37044).
Apr 13 20:08:57.124811 sshd[1653]: Accepted publickey for core from 20.229.252.112 port 37044 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:57.127736 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:57.136656 systemd-logind[1494]: New session 4 of user core.
Apr 13 20:08:57.150296 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:08:57.301033 sshd[1653]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:57.307899 systemd[1]: sshd@3-62.238.1.80:22-20.229.252.112:37044.service: Deactivated successfully.
Apr 13 20:08:57.311488 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:08:57.312580 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:08:57.314404 systemd-logind[1494]: Removed session 4.
Apr 13 20:08:57.353924 systemd[1]: Started sshd@4-62.238.1.80:22-20.229.252.112:37056.service - OpenSSH per-connection server daemon (20.229.252.112:37056).
Apr 13 20:08:57.529834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:08:57.538417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:08:57.568264 sshd[1660]: Accepted publickey for core from 20.229.252.112 port 37056 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:57.571006 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:57.581392 systemd-logind[1494]: New session 5 of user core.
Apr 13 20:08:57.587344 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:08:57.704267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:08:57.722233 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:08:57.733737 sshd[1660]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:57.739612 systemd[1]: sshd@4-62.238.1.80:22-20.229.252.112:37056.service: Deactivated successfully.
Apr 13 20:08:57.739919 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit.
Apr 13 20:08:57.742332 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 20:08:57.744387 systemd-logind[1494]: Removed session 5.
Apr 13 20:08:57.773092 systemd[1]: Started sshd@5-62.238.1.80:22-20.229.252.112:37068.service - OpenSSH per-connection server daemon (20.229.252.112:37068).
Apr 13 20:08:57.775724 kubelet[1671]: E0413 20:08:57.775575 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:08:57.779988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:08:57.781335 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:08:57.975744 sshd[1681]: Accepted publickey for core from 20.229.252.112 port 37068 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:57.977999 sshd[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:57.985737 systemd-logind[1494]: New session 6 of user core.
Apr 13 20:08:57.993325 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:08:58.148350 sshd[1681]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:58.153503 systemd[1]: sshd@5-62.238.1.80:22-20.229.252.112:37068.service: Deactivated successfully.
Apr 13 20:08:58.157115 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:08:58.160736 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:08:58.162574 systemd-logind[1494]: Removed session 6.
Apr 13 20:08:58.198036 systemd[1]: Started sshd@6-62.238.1.80:22-20.229.252.112:37078.service - OpenSSH per-connection server daemon (20.229.252.112:37078).
Apr 13 20:08:58.409389 sshd[1689]: Accepted publickey for core from 20.229.252.112 port 37078 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:58.412160 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:58.421186 systemd-logind[1494]: New session 7 of user core.
Apr 13 20:08:58.430312 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:08:58.566467 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 20:08:58.567269 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:08:58.589546 sudo[1692]: pam_unix(sudo:session): session closed for user root
Apr 13 20:08:58.622295 sshd[1689]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:58.627719 systemd[1]: sshd@6-62.238.1.80:22-20.229.252.112:37078.service: Deactivated successfully.
Apr 13 20:08:58.631463 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:08:58.634204 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:08:58.635942 systemd-logind[1494]: Removed session 7.
Apr 13 20:08:58.678470 systemd[1]: Started sshd@7-62.238.1.80:22-20.229.252.112:37094.service - OpenSSH per-connection server daemon (20.229.252.112:37094).
Apr 13 20:08:58.889425 sshd[1697]: Accepted publickey for core from 20.229.252.112 port 37094 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:58.890693 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:58.896243 systemd-logind[1494]: New session 8 of user core.
Apr 13 20:08:58.905169 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 20:08:59.033167 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:08:59.034201 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:08:59.041646 sudo[1701]: pam_unix(sudo:session): session closed for user root
Apr 13 20:08:59.053778 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:08:59.054609 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:08:59.077883 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:08:59.091305 auditctl[1704]: No rules
Apr 13 20:08:59.092517 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:08:59.092911 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:08:59.100750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:08:59.170687 augenrules[1722]: No rules
Apr 13 20:08:59.173508 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:08:59.175493 sudo[1700]: pam_unix(sudo:session): session closed for user root
Apr 13 20:08:59.207195 sshd[1697]: pam_unix(sshd:session): session closed for user core
Apr 13 20:08:59.212791 systemd[1]: sshd@7-62.238.1.80:22-20.229.252.112:37094.service: Deactivated successfully.
Apr 13 20:08:59.216363 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 20:08:59.218993 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit.
Apr 13 20:08:59.220753 systemd-logind[1494]: Removed session 8.
Apr 13 20:08:59.255757 systemd[1]: Started sshd@8-62.238.1.80:22-20.229.252.112:37104.service - OpenSSH per-connection server daemon (20.229.252.112:37104).
Apr 13 20:08:59.481344 sshd[1730]: Accepted publickey for core from 20.229.252.112 port 37104 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8
Apr 13 20:08:59.484164 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:08:59.493137 systemd-logind[1494]: New session 9 of user core.
Apr 13 20:08:59.499357 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 20:08:59.627048 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:08:59.627958 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:08:59.920737 (dockerd)[1749]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:08:59.920799 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:09:00.149756 dockerd[1749]: time="2026-04-13T20:09:00.149681428Z" level=info msg="Starting up"
Apr 13 20:09:00.207413 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3879376052-merged.mount: Deactivated successfully.
Apr 13 20:09:00.240692 dockerd[1749]: time="2026-04-13T20:09:00.240425378Z" level=info msg="Loading containers: start."
Apr 13 20:09:00.359107 kernel: Initializing XFRM netlink socket
Apr 13 20:09:00.385892 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection.
Apr 13 20:09:00.435872 systemd-networkd[1398]: docker0: Link UP
Apr 13 20:09:00.449435 dockerd[1749]: time="2026-04-13T20:09:00.449385908Z" level=info msg="Loading containers: done."
Apr 13 20:09:00.464673 dockerd[1749]: time="2026-04-13T20:09:00.464585558Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:09:00.464799 dockerd[1749]: time="2026-04-13T20:09:00.464707408Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:09:00.464833 dockerd[1749]: time="2026-04-13T20:09:00.464814948Z" level=info msg="Daemon has completed initialization"
Apr 13 20:09:01.887706 systemd-resolved[1399]: Clock change detected. Flushing caches.
Apr 13 20:09:01.888237 systemd-timesyncd[1437]: Contacted time server 141.82.25.203:123 (2.flatcar.pool.ntp.org).
Apr 13 20:09:01.888303 systemd-timesyncd[1437]: Initial clock synchronization to Mon 2026-04-13 20:09:01.887643 UTC.
Apr 13 20:09:01.897487 dockerd[1749]: time="2026-04-13T20:09:01.897371255Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:09:01.897824 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:09:02.426725 containerd[1517]: time="2026-04-13T20:09:02.426684955Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\""
Apr 13 20:09:02.993018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1554298549.mount: Deactivated successfully.
Apr 13 20:09:04.214552 containerd[1517]: time="2026-04-13T20:09:04.214487894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:04.215786 containerd[1517]: time="2026-04-13T20:09:04.215626194Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=27569796"
Apr 13 20:09:04.217450 containerd[1517]: time="2026-04-13T20:09:04.216800024Z" level=info msg="ImageCreate event name:\"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:04.219114 containerd[1517]: time="2026-04-13T20:09:04.219080624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:04.220000 containerd[1517]: time="2026-04-13T20:09:04.219793814Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"27566295\" in 1.793072309s"
Apr 13 20:09:04.220000 containerd[1517]: time="2026-04-13T20:09:04.219820164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\""
Apr 13 20:09:04.220487 containerd[1517]: time="2026-04-13T20:09:04.220457324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\""
Apr 13 20:09:05.363241 containerd[1517]: time="2026-04-13T20:09:05.363179544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:05.364210 containerd[1517]: time="2026-04-13T20:09:05.364164624Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=21449617"
Apr 13 20:09:05.365119 containerd[1517]: time="2026-04-13T20:09:05.364886584Z" level=info msg="ImageCreate event name:\"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:05.367360 containerd[1517]: time="2026-04-13T20:09:05.367337614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:05.368101 containerd[1517]: time="2026-04-13T20:09:05.368083364Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"23014443\" in 1.14759294s"
Apr 13 20:09:05.368169 containerd[1517]: time="2026-04-13T20:09:05.368158374Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\""
Apr 13 20:09:05.368767 containerd[1517]: time="2026-04-13T20:09:05.368672514Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\""
Apr 13 20:09:06.356936 containerd[1517]: time="2026-04-13T20:09:06.356883444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:06.358114 containerd[1517]: time="2026-04-13T20:09:06.357915234Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=15548448"
Apr 13 20:09:06.359132 containerd[1517]: time="2026-04-13T20:09:06.359081984Z" level=info msg="ImageCreate event name:\"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:06.362434 containerd[1517]: time="2026-04-13T20:09:06.361386564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:06.362434 containerd[1517]: time="2026-04-13T20:09:06.362043954Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"17113292\" in 993.21715ms"
Apr 13 20:09:06.362434 containerd[1517]: time="2026-04-13T20:09:06.362067354Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\""
Apr 13 20:09:06.362689 containerd[1517]: time="2026-04-13T20:09:06.362663394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\""
Apr 13 20:09:07.427335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277078937.mount: Deactivated successfully.
Apr 13 20:09:07.619263 containerd[1517]: time="2026-04-13T20:09:07.619212924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:07.620493 containerd[1517]: time="2026-04-13T20:09:07.620306354Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=25685349"
Apr 13 20:09:07.622250 containerd[1517]: time="2026-04-13T20:09:07.621604354Z" level=info msg="ImageCreate event name:\"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:07.625591 containerd[1517]: time="2026-04-13T20:09:07.625554704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:07.625979 containerd[1517]: time="2026-04-13T20:09:07.625953494Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"25684340\" in 1.26326307s"
Apr 13 20:09:07.626009 containerd[1517]: time="2026-04-13T20:09:07.625983534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\""
Apr 13 20:09:07.626459 containerd[1517]: time="2026-04-13T20:09:07.626439574Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 13 20:09:08.166659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927155632.mount: Deactivated successfully.
Apr 13 20:09:09.016964 containerd[1517]: time="2026-04-13T20:09:09.016917124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:09.018509 containerd[1517]: time="2026-04-13T20:09:09.018475484Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556642"
Apr 13 20:09:09.020637 containerd[1517]: time="2026-04-13T20:09:09.020605564Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:09.024353 containerd[1517]: time="2026-04-13T20:09:09.023538744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:09.024656 containerd[1517]: time="2026-04-13T20:09:09.024472724Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.39800894s"
Apr 13 20:09:09.024656 containerd[1517]: time="2026-04-13T20:09:09.024505284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 13 20:09:09.025120 containerd[1517]: time="2026-04-13T20:09:09.025087524Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 20:09:09.237097 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:09:09.246643 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:09.412287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:09.415963 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:09:09.448817 kubelet[2026]: E0413 20:09:09.448753 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:09:09.454067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:09:09.454237 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:09:09.581716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552332255.mount: Deactivated successfully.
Apr 13 20:09:09.587646 containerd[1517]: time="2026-04-13T20:09:09.587572534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:09.588905 containerd[1517]: time="2026-04-13T20:09:09.588835534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240"
Apr 13 20:09:09.589989 containerd[1517]: time="2026-04-13T20:09:09.589899614Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:09.596499 containerd[1517]: time="2026-04-13T20:09:09.596391254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:09.598222 containerd[1517]: time="2026-04-13T20:09:09.598033874Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 572.91586ms"
Apr 13 20:09:09.598222 containerd[1517]: time="2026-04-13T20:09:09.598091784Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 13 20:09:09.599365 containerd[1517]: time="2026-04-13T20:09:09.598910954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 13 20:09:10.187860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491085545.mount: Deactivated successfully.
Apr 13 20:09:10.936840 containerd[1517]: time="2026-04-13T20:09:10.936788204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:10.937793 containerd[1517]: time="2026-04-13T20:09:10.937761594Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643961"
Apr 13 20:09:10.938612 containerd[1517]: time="2026-04-13T20:09:10.938559024Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:10.941052 containerd[1517]: time="2026-04-13T20:09:10.940269754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:10.941052 containerd[1517]: time="2026-04-13T20:09:10.940960874Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.34201219s"
Apr 13 20:09:10.941052 containerd[1517]: time="2026-04-13T20:09:10.940981404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 13 20:09:11.856698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:11.862707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:11.895395 systemd[1]: Reloading requested from client PID 2125 ('systemctl') (unit session-9.scope)...
Apr 13 20:09:11.895407 systemd[1]: Reloading...
Apr 13 20:09:11.997351 zram_generator::config[2164]: No configuration found.
Apr 13 20:09:12.089185 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:09:12.150609 systemd[1]: Reloading finished in 254 ms.
Apr 13 20:09:12.194178 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:09:12.194266 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:09:12.194576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:12.199540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:09:12.350310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:09:12.354244 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:09:12.400354 kubelet[2217]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:09:12.512371 kubelet[2217]: I0413 20:09:12.512230 2217 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 13 20:09:12.512371 kubelet[2217]: I0413 20:09:12.512263 2217 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:09:12.512371 kubelet[2217]: I0413 20:09:12.512279 2217 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 20:09:12.512371 kubelet[2217]: I0413 20:09:12.512283 2217 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:09:12.514371 kubelet[2217]: I0413 20:09:12.512707 2217 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 13 20:09:12.519161 kubelet[2217]: I0413 20:09:12.519129 2217 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:09:12.521469 kubelet[2217]: E0413 20:09:12.521418 2217 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://62.238.1.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 62.238.1.80:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:09:12.523044 kubelet[2217]: E0413 20:09:12.522995 2217 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:09:12.523044 kubelet[2217]: I0413 20:09:12.523050 2217 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:09:12.525879 kubelet[2217]: I0413 20:09:12.525854 2217 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 20:09:12.526606 kubelet[2217]: I0413 20:09:12.526552 2217 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:09:12.526692 kubelet[2217]: I0413 20:09:12.526580 2217 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-1-0f1354cb62","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:09:12.526692 kubelet[2217]: I0413 20:09:12.526688 2217 topology_manager.go:143] "Creating topology manager with none policy"
Apr 13 20:09:12.526692 kubelet[2217]: I0413 20:09:12.526694 2217 container_manager_linux.go:308] "Creating device plugin manager"
Apr 13 20:09:12.526903 kubelet[2217]: I0413 20:09:12.526773 2217 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 20:09:12.529021 kubelet[2217]: I0413 20:09:12.528993 2217 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 13 20:09:12.529144 kubelet[2217]: I0413 20:09:12.529121 2217 kubelet.go:482] "Attempting to sync node with API server"
Apr 13 20:09:12.529144 kubelet[2217]: I0413 20:09:12.529137 2217 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:09:12.529235 kubelet[2217]: I0413 20:09:12.529158 2217 kubelet.go:394] "Adding apiserver pod source"
Apr 13 20:09:12.529235 kubelet[2217]: I0413 20:09:12.529166 2217 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:09:12.532080 kubelet[2217]: I0413 20:09:12.531751 2217 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:09:12.533307 kubelet[2217]: I0413 20:09:12.533264 2217 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:09:12.533307 kubelet[2217]: I0413 20:09:12.533286 2217 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 20:09:12.533497 kubelet[2217]: W0413 20:09:12.533350 2217 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:09:12.536041 kubelet[2217]: I0413 20:09:12.535713 2217 server.go:1257] "Started kubelet" Apr 13 20:09:12.543342 kubelet[2217]: I0413 20:09:12.542505 2217 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 13 20:09:12.545477 kubelet[2217]: I0413 20:09:12.545456 2217 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:09:12.546361 kubelet[2217]: I0413 20:09:12.546289 2217 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:09:12.552362 kubelet[2217]: E0413 20:09:12.551302 2217 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://62.238.1.80:6443/api/v1/namespaces/default/events\": dial tcp 62.238.1.80:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-1-0f1354cb62.18a6038047921f82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-1-0f1354cb62,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-1-0f1354cb62,},FirstTimestamp:2026-04-13 20:09:12.535695234 +0000 UTC m=+0.178556221,LastTimestamp:2026-04-13 20:09:12.535695234 +0000 UTC m=+0.178556221,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-1-0f1354cb62,}" Apr 13 20:09:12.553333 kubelet[2217]: I0413 20:09:12.552633 2217 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:09:12.554598 kubelet[2217]: I0413 20:09:12.554589 2217 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 13 20:09:12.554742 kubelet[2217]: E0413 20:09:12.554733 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:12.555199 kubelet[2217]: I0413 20:09:12.555189 
2217 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:09:12.555273 kubelet[2217]: I0413 20:09:12.555266 2217 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:09:12.555799 kubelet[2217]: I0413 20:09:12.555788 2217 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:09:12.555901 kubelet[2217]: I0413 20:09:12.555890 2217 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:09:12.556150 kubelet[2217]: I0413 20:09:12.556120 2217 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:09:12.556225 kubelet[2217]: I0413 20:09:12.556216 2217 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:09:12.556415 kubelet[2217]: I0413 20:09:12.556406 2217 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:09:12.557290 kubelet[2217]: E0413 20:09:12.557260 2217 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://62.238.1.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-1-0f1354cb62?timeout=10s\": dial tcp 62.238.1.80:6443: connect: connection refused" interval="200ms" Apr 13 20:09:12.558140 kubelet[2217]: I0413 20:09:12.558129 2217 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:09:12.576274 kubelet[2217]: I0413 20:09:12.576245 2217 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 20:09:12.577714 kubelet[2217]: I0413 20:09:12.577452 2217 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 13 20:09:12.577714 kubelet[2217]: I0413 20:09:12.577465 2217 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 20:09:12.577714 kubelet[2217]: I0413 20:09:12.577505 2217 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 20:09:12.577714 kubelet[2217]: E0413 20:09:12.577543 2217 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:09:12.587549 kubelet[2217]: I0413 20:09:12.587474 2217 cpu_manager.go:225] "Starting" policy="none" Apr 13 20:09:12.587549 kubelet[2217]: I0413 20:09:12.587482 2217 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 20:09:12.587549 kubelet[2217]: I0413 20:09:12.587494 2217 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 20:09:12.589965 kubelet[2217]: I0413 20:09:12.589790 2217 policy_none.go:50] "Start" Apr 13 20:09:12.589965 kubelet[2217]: I0413 20:09:12.589805 2217 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:09:12.589965 kubelet[2217]: I0413 20:09:12.589814 2217 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:09:12.590720 kubelet[2217]: I0413 20:09:12.590710 2217 policy_none.go:44] "Start" Apr 13 20:09:12.594369 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 13 20:09:12.605743 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:09:12.608533 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 13 20:09:12.620223 kubelet[2217]: E0413 20:09:12.620199 2217 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:09:12.620671 kubelet[2217]: I0413 20:09:12.620611 2217 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 20:09:12.620671 kubelet[2217]: I0413 20:09:12.620621 2217 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:09:12.620954 kubelet[2217]: I0413 20:09:12.620894 2217 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 20:09:12.622001 kubelet[2217]: E0413 20:09:12.621975 2217 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:09:12.622057 kubelet[2217]: E0413 20:09:12.622050 2217 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:12.691621 systemd[1]: Created slice kubepods-burstable-pod74eb50dddacce6ae832ce90b59b1a89b.slice - libcontainer container kubepods-burstable-pod74eb50dddacce6ae832ce90b59b1a89b.slice. Apr 13 20:09:12.704728 kubelet[2217]: E0413 20:09:12.704561 2217 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.708467 systemd[1]: Created slice kubepods-burstable-podaa60da72955918847e83a2434b86896d.slice - libcontainer container kubepods-burstable-podaa60da72955918847e83a2434b86896d.slice. 
Apr 13 20:09:12.717980 kubelet[2217]: E0413 20:09:12.717946 2217 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.724698 kubelet[2217]: I0413 20:09:12.724657 2217 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.724929 systemd[1]: Created slice kubepods-burstable-pod85fa92a3fc0e9cce549d948476decd47.slice - libcontainer container kubepods-burstable-pod85fa92a3fc0e9cce549d948476decd47.slice. Apr 13 20:09:12.725366 kubelet[2217]: E0413 20:09:12.725188 2217 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://62.238.1.80:6443/api/v1/nodes\": dial tcp 62.238.1.80:6443: connect: connection refused" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.728410 kubelet[2217]: E0413 20:09:12.728098 2217 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757030 kubelet[2217]: I0413 20:09:12.756883 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74eb50dddacce6ae832ce90b59b1a89b-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" (UID: \"74eb50dddacce6ae832ce90b59b1a89b\") " pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757030 kubelet[2217]: I0413 20:09:12.757008 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74eb50dddacce6ae832ce90b59b1a89b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" (UID: \"74eb50dddacce6ae832ce90b59b1a89b\") " pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757030 kubelet[2217]: I0413 20:09:12.757035 
2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74eb50dddacce6ae832ce90b59b1a89b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" (UID: \"74eb50dddacce6ae832ce90b59b1a89b\") " pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757257 kubelet[2217]: I0413 20:09:12.757064 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757257 kubelet[2217]: I0413 20:09:12.757106 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757257 kubelet[2217]: I0413 20:09:12.757144 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85fa92a3fc0e9cce549d948476decd47-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-1-0f1354cb62\" (UID: \"85fa92a3fc0e9cce549d948476decd47\") " pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757257 kubelet[2217]: I0413 20:09:12.757178 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757257 kubelet[2217]: I0413 20:09:12.757198 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.757516 kubelet[2217]: I0413 20:09:12.757233 2217 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.758465 kubelet[2217]: E0413 20:09:12.758373 2217 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://62.238.1.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-1-0f1354cb62?timeout=10s\": dial tcp 62.238.1.80:6443: connect: connection refused" interval="400ms" Apr 13 20:09:12.929187 kubelet[2217]: I0413 20:09:12.928997 2217 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:12.929563 kubelet[2217]: E0413 20:09:12.929503 2217 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://62.238.1.80:6443/api/v1/nodes\": dial tcp 62.238.1.80:6443: connect: connection refused" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:13.009770 containerd[1517]: time="2026-04-13T20:09:13.009311584Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-1-0f1354cb62,Uid:74eb50dddacce6ae832ce90b59b1a89b,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:13.026305 containerd[1517]: time="2026-04-13T20:09:13.026235124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-1-0f1354cb62,Uid:aa60da72955918847e83a2434b86896d,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:13.034209 containerd[1517]: time="2026-04-13T20:09:13.034130104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-1-0f1354cb62,Uid:85fa92a3fc0e9cce549d948476decd47,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:13.160370 kubelet[2217]: E0413 20:09:13.159031 2217 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://62.238.1.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-1-0f1354cb62?timeout=10s\": dial tcp 62.238.1.80:6443: connect: connection refused" interval="800ms" Apr 13 20:09:13.332546 kubelet[2217]: I0413 20:09:13.332419 2217 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:13.333341 kubelet[2217]: E0413 20:09:13.332846 2217 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://62.238.1.80:6443/api/v1/nodes\": dial tcp 62.238.1.80:6443: connect: connection refused" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:13.521629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551403675.mount: Deactivated successfully. 
Apr 13 20:09:13.527146 containerd[1517]: time="2026-04-13T20:09:13.527055734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:13.529542 containerd[1517]: time="2026-04-13T20:09:13.529470114Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:09:13.530650 containerd[1517]: time="2026-04-13T20:09:13.530593974Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:13.531845 containerd[1517]: time="2026-04-13T20:09:13.531761144Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:13.534025 containerd[1517]: time="2026-04-13T20:09:13.533959104Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 13 20:09:13.535778 containerd[1517]: time="2026-04-13T20:09:13.535474694Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:09:13.535778 containerd[1517]: time="2026-04-13T20:09:13.535576954Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:13.539193 containerd[1517]: time="2026-04-13T20:09:13.539132684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:09:13.540802 
containerd[1517]: time="2026-04-13T20:09:13.540493044Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 513.96496ms" Apr 13 20:09:13.543462 containerd[1517]: time="2026-04-13T20:09:13.543410184Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 509.19699ms" Apr 13 20:09:13.544551 containerd[1517]: time="2026-04-13T20:09:13.544498814Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 535.03175ms" Apr 13 20:09:13.678090 containerd[1517]: time="2026-04-13T20:09:13.677553194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:13.678090 containerd[1517]: time="2026-04-13T20:09:13.677588734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:13.678090 containerd[1517]: time="2026-04-13T20:09:13.677617484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:13.678883 containerd[1517]: time="2026-04-13T20:09:13.677689854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:13.681827 containerd[1517]: time="2026-04-13T20:09:13.681265304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:13.681827 containerd[1517]: time="2026-04-13T20:09:13.681310994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:13.681827 containerd[1517]: time="2026-04-13T20:09:13.681337184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:13.681827 containerd[1517]: time="2026-04-13T20:09:13.681416634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:13.688150 containerd[1517]: time="2026-04-13T20:09:13.687686324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:13.688150 containerd[1517]: time="2026-04-13T20:09:13.687731774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:13.688150 containerd[1517]: time="2026-04-13T20:09:13.687741194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:13.688150 containerd[1517]: time="2026-04-13T20:09:13.687793764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:13.703598 systemd[1]: Started cri-containerd-6ad753419a4348c6904805ed145666c4fe9692136fce90133562deeb30a81ee8.scope - libcontainer container 6ad753419a4348c6904805ed145666c4fe9692136fce90133562deeb30a81ee8. 
Apr 13 20:09:13.706744 systemd[1]: Started cri-containerd-f4ce9c9f00deafec76c7c1d52f15316849b396201d7c6b5797f2b6a6a63b5796.scope - libcontainer container f4ce9c9f00deafec76c7c1d52f15316849b396201d7c6b5797f2b6a6a63b5796. Apr 13 20:09:13.710480 systemd[1]: Started cri-containerd-de66db91b85ab61eea4662fbf0502128984d58e0e272b25d207092d060b38dc4.scope - libcontainer container de66db91b85ab61eea4662fbf0502128984d58e0e272b25d207092d060b38dc4. Apr 13 20:09:13.757057 containerd[1517]: time="2026-04-13T20:09:13.756914974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-1-0f1354cb62,Uid:74eb50dddacce6ae832ce90b59b1a89b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4ce9c9f00deafec76c7c1d52f15316849b396201d7c6b5797f2b6a6a63b5796\"" Apr 13 20:09:13.763369 containerd[1517]: time="2026-04-13T20:09:13.762572564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-1-0f1354cb62,Uid:aa60da72955918847e83a2434b86896d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ad753419a4348c6904805ed145666c4fe9692136fce90133562deeb30a81ee8\"" Apr 13 20:09:13.765760 containerd[1517]: time="2026-04-13T20:09:13.765734144Z" level=info msg="CreateContainer within sandbox \"f4ce9c9f00deafec76c7c1d52f15316849b396201d7c6b5797f2b6a6a63b5796\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:09:13.767387 containerd[1517]: time="2026-04-13T20:09:13.767367754Z" level=info msg="CreateContainer within sandbox \"6ad753419a4348c6904805ed145666c4fe9692136fce90133562deeb30a81ee8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:09:13.771378 containerd[1517]: time="2026-04-13T20:09:13.771355534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-1-0f1354cb62,Uid:85fa92a3fc0e9cce549d948476decd47,Namespace:kube-system,Attempt:0,} returns sandbox id \"de66db91b85ab61eea4662fbf0502128984d58e0e272b25d207092d060b38dc4\"" Apr 
13 20:09:13.774735 containerd[1517]: time="2026-04-13T20:09:13.774718664Z" level=info msg="CreateContainer within sandbox \"de66db91b85ab61eea4662fbf0502128984d58e0e272b25d207092d060b38dc4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:09:13.790585 containerd[1517]: time="2026-04-13T20:09:13.790564314Z" level=info msg="CreateContainer within sandbox \"f4ce9c9f00deafec76c7c1d52f15316849b396201d7c6b5797f2b6a6a63b5796\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73630486bcbebb0c37a19859843c28273b06f45a5fdaee8e15caee6b1277e9af\"" Apr 13 20:09:13.791363 containerd[1517]: time="2026-04-13T20:09:13.791347694Z" level=info msg="StartContainer for \"73630486bcbebb0c37a19859843c28273b06f45a5fdaee8e15caee6b1277e9af\"" Apr 13 20:09:13.795102 containerd[1517]: time="2026-04-13T20:09:13.795075374Z" level=info msg="CreateContainer within sandbox \"6ad753419a4348c6904805ed145666c4fe9692136fce90133562deeb30a81ee8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba\"" Apr 13 20:09:13.795593 containerd[1517]: time="2026-04-13T20:09:13.795571294Z" level=info msg="StartContainer for \"2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba\"" Apr 13 20:09:13.796727 containerd[1517]: time="2026-04-13T20:09:13.796680274Z" level=info msg="CreateContainer within sandbox \"de66db91b85ab61eea4662fbf0502128984d58e0e272b25d207092d060b38dc4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73\"" Apr 13 20:09:13.797519 containerd[1517]: time="2026-04-13T20:09:13.797494834Z" level=info msg="StartContainer for \"78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73\"" Apr 13 20:09:13.817426 systemd[1]: Started cri-containerd-73630486bcbebb0c37a19859843c28273b06f45a5fdaee8e15caee6b1277e9af.scope - libcontainer container 
73630486bcbebb0c37a19859843c28273b06f45a5fdaee8e15caee6b1277e9af. Apr 13 20:09:13.827448 systemd[1]: Started cri-containerd-78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73.scope - libcontainer container 78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73. Apr 13 20:09:13.831039 systemd[1]: Started cri-containerd-2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba.scope - libcontainer container 2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba. Apr 13 20:09:13.879657 containerd[1517]: time="2026-04-13T20:09:13.879621954Z" level=info msg="StartContainer for \"73630486bcbebb0c37a19859843c28273b06f45a5fdaee8e15caee6b1277e9af\" returns successfully" Apr 13 20:09:13.879845 containerd[1517]: time="2026-04-13T20:09:13.879817784Z" level=info msg="StartContainer for \"2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba\" returns successfully" Apr 13 20:09:13.897413 containerd[1517]: time="2026-04-13T20:09:13.897383254Z" level=info msg="StartContainer for \"78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73\" returns successfully" Apr 13 20:09:14.136835 kubelet[2217]: I0413 20:09:14.136805 2217 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:14.535198 kubelet[2217]: E0413 20:09:14.535163 2217 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-1-0f1354cb62\" not found" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:14.596952 kubelet[2217]: E0413 20:09:14.596930 2217 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:14.598048 kubelet[2217]: E0413 20:09:14.597914 2217 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" node="ci-4081-3-7-1-0f1354cb62" 
Apr 13 20:09:14.598848 kubelet[2217]: E0413 20:09:14.598822 2217 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:14.618757 kubelet[2217]: I0413 20:09:14.618687 2217 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:14.618757 kubelet[2217]: E0413 20:09:14.618734 2217 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-1-0f1354cb62\": node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:14.624733 kubelet[2217]: E0413 20:09:14.624686 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:14.725202 kubelet[2217]: E0413 20:09:14.725157 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:14.825875 kubelet[2217]: E0413 20:09:14.825728 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:14.926660 kubelet[2217]: E0413 20:09:14.926569 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:15.027063 kubelet[2217]: E0413 20:09:15.027005 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:15.127871 kubelet[2217]: E0413 20:09:15.127701 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:15.228746 kubelet[2217]: E0413 20:09:15.228678 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:15.329605 kubelet[2217]: E0413 
20:09:15.329531 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:15.430723 kubelet[2217]: E0413 20:09:15.430561 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:15.531878 kubelet[2217]: E0413 20:09:15.531734 2217 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-1-0f1354cb62\" not found" Apr 13 20:09:15.600854 kubelet[2217]: I0413 20:09:15.600160 2217 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:15.600854 kubelet[2217]: I0413 20:09:15.600695 2217 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:15.656838 kubelet[2217]: I0413 20:09:15.656134 2217 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:15.666171 kubelet[2217]: E0413 20:09:15.665894 2217 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:15.666171 kubelet[2217]: I0413 20:09:15.665926 2217 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:15.672730 kubelet[2217]: I0413 20:09:15.672673 2217 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:15.678507 kubelet[2217]: E0413 20:09:15.678463 2217 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:16.533604 kubelet[2217]: I0413 20:09:16.533519 2217 apiserver.go:52] "Watching 
apiserver" Apr 13 20:09:16.556354 kubelet[2217]: I0413 20:09:16.556250 2217 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:09:16.602510 kubelet[2217]: I0413 20:09:16.602237 2217 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:16.602960 kubelet[2217]: I0413 20:09:16.602709 2217 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:16.609228 kubelet[2217]: E0413 20:09:16.609187 2217 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:16.610277 kubelet[2217]: E0413 20:09:16.610198 2217 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:16.814042 systemd[1]: Reloading requested from client PID 2503 ('systemctl') (unit session-9.scope)... Apr 13 20:09:16.814057 systemd[1]: Reloading... Apr 13 20:09:16.900381 zram_generator::config[2558]: No configuration found. Apr 13 20:09:16.968755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:09:17.038694 systemd[1]: Reloading finished in 223 ms. Apr 13 20:09:17.076528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:09:17.099588 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:09:17.099844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:17.104687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 13 20:09:17.221945 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:09:17.231654 (kubelet)[2594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:09:17.269349 kubelet[2594]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:09:17.275014 kubelet[2594]: I0413 20:09:17.274983 2594 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 13 20:09:17.275127 kubelet[2594]: I0413 20:09:17.275118 2594 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:09:17.275180 kubelet[2594]: I0413 20:09:17.275172 2594 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:09:17.275213 kubelet[2594]: I0413 20:09:17.275205 2594 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:09:17.275518 kubelet[2594]: I0413 20:09:17.275502 2594 server.go:951] "Client rotation is on, will bootstrap in background" Apr 13 20:09:17.276637 kubelet[2594]: I0413 20:09:17.276622 2594 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:09:17.278548 kubelet[2594]: I0413 20:09:17.278532 2594 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:09:17.280740 kubelet[2594]: E0413 20:09:17.280662 2594 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:09:17.280740 kubelet[2594]: I0413 20:09:17.280692 2594 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Apr 13 20:09:17.283945 kubelet[2594]: I0413 20:09:17.283926 2594 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 13 20:09:17.284138 kubelet[2594]: I0413 20:09:17.284115 2594 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:09:17.284252 kubelet[2594]: I0413 20:09:17.284132 2594 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-1-0f1354cb62","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none
","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:09:17.284252 kubelet[2594]: I0413 20:09:17.284249 2594 topology_manager.go:143] "Creating topology manager with none policy" Apr 13 20:09:17.284386 kubelet[2594]: I0413 20:09:17.284255 2594 container_manager_linux.go:308] "Creating device plugin manager" Apr 13 20:09:17.284386 kubelet[2594]: I0413 20:09:17.284274 2594 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:09:17.284434 kubelet[2594]: I0413 20:09:17.284419 2594 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 13 20:09:17.284627 kubelet[2594]: I0413 20:09:17.284599 2594 kubelet.go:482] "Attempting to sync node with API server" Apr 13 20:09:17.284627 kubelet[2594]: I0413 20:09:17.284617 2594 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:09:17.284627 kubelet[2594]: I0413 20:09:17.284628 2594 kubelet.go:394] "Adding apiserver pod source" Apr 13 20:09:17.284731 kubelet[2594]: I0413 20:09:17.284634 2594 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:09:17.292346 kubelet[2594]: I0413 20:09:17.291088 2594 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:09:17.293623 kubelet[2594]: I0413 20:09:17.293080 2594 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:09:17.293623 kubelet[2594]: I0413 20:09:17.293099 2594 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:09:17.296002 kubelet[2594]: I0413 20:09:17.295917 2594 server.go:1257] "Started kubelet" Apr 13 20:09:17.298224 kubelet[2594]: I0413 20:09:17.298195 2594 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 13 20:09:17.301494 
kubelet[2594]: I0413 20:09:17.301471 2594 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:09:17.302439 kubelet[2594]: I0413 20:09:17.302426 2594 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:09:17.305376 kubelet[2594]: I0413 20:09:17.305009 2594 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 20:09:17.306360 kubelet[2594]: I0413 20:09:17.306000 2594 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:09:17.306360 kubelet[2594]: I0413 20:09:17.306053 2594 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:09:17.306360 kubelet[2594]: I0413 20:09:17.306204 2594 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:09:17.307137 kubelet[2594]: I0413 20:09:17.307027 2594 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:09:17.308677 kubelet[2594]: I0413 20:09:17.308303 2594 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 13 20:09:17.309773 kubelet[2594]: E0413 20:09:17.309761 2594 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:09:17.310468 kubelet[2594]: I0413 20:09:17.310455 2594 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:09:17.310583 kubelet[2594]: I0413 20:09:17.310572 2594 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:09:17.311657 kubelet[2594]: I0413 20:09:17.311649 2594 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:09:17.313882 kubelet[2594]: I0413 20:09:17.313870 2594 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:09:17.314737 kubelet[2594]: I0413 20:09:17.314727 2594 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:09:17.316499 kubelet[2594]: I0413 20:09:17.316487 2594 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 13 20:09:17.316731 kubelet[2594]: I0413 20:09:17.316546 2594 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 20:09:17.316731 kubelet[2594]: I0413 20:09:17.316560 2594 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 20:09:17.316731 kubelet[2594]: E0413 20:09:17.316593 2594 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.355891 2594 cpu_manager.go:225] "Starting" policy="none" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356120 2594 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356148 2594 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356289 2594 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356301 2594 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356343 2594 policy_none.go:50] "Start" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356352 2594 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356364 2594 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356500 2594 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 20:09:17.357142 kubelet[2594]: I0413 20:09:17.356513 2594 policy_none.go:44] "Start" Apr 13 20:09:17.361366 kubelet[2594]: E0413 20:09:17.361349 2594 manager.go:525] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:09:17.361507 kubelet[2594]: I0413 20:09:17.361488 2594 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 20:09:17.361562 kubelet[2594]: I0413 20:09:17.361499 2594 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:09:17.362162 kubelet[2594]: I0413 20:09:17.361721 2594 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 20:09:17.364181 kubelet[2594]: E0413 20:09:17.364122 2594 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:09:17.417953 kubelet[2594]: I0413 20:09:17.417903 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.418363 kubelet[2594]: I0413 20:09:17.417911 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.418688 kubelet[2594]: I0413 20:09:17.418610 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.425203 kubelet[2594]: E0413 20:09:17.425172 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.426271 kubelet[2594]: E0413 20:09:17.426220 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.426546 kubelet[2594]: E0413 20:09:17.426516 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" already exists" 
pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.468016 kubelet[2594]: I0413 20:09:17.467959 2594 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.475861 kubelet[2594]: I0413 20:09:17.475566 2594 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.475861 kubelet[2594]: I0413 20:09:17.475646 2594 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515564 kubelet[2594]: I0413 20:09:17.515467 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74eb50dddacce6ae832ce90b59b1a89b-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" (UID: \"74eb50dddacce6ae832ce90b59b1a89b\") " pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515564 kubelet[2594]: I0413 20:09:17.515521 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515564 kubelet[2594]: I0413 20:09:17.515562 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515807 kubelet[2594]: I0413 20:09:17.515587 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74eb50dddacce6ae832ce90b59b1a89b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" (UID: \"74eb50dddacce6ae832ce90b59b1a89b\") " pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515807 kubelet[2594]: I0413 20:09:17.515610 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74eb50dddacce6ae832ce90b59b1a89b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" (UID: \"74eb50dddacce6ae832ce90b59b1a89b\") " pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515807 kubelet[2594]: I0413 20:09:17.515632 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515807 kubelet[2594]: I0413 20:09:17.515669 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:17.515807 kubelet[2594]: I0413 20:09:17.515690 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa60da72955918847e83a2434b86896d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" (UID: \"aa60da72955918847e83a2434b86896d\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 
20:09:17.515971 kubelet[2594]: I0413 20:09:17.515711 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85fa92a3fc0e9cce549d948476decd47-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-1-0f1354cb62\" (UID: \"85fa92a3fc0e9cce549d948476decd47\") " pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:18.285346 kubelet[2594]: I0413 20:09:18.285291 2594 apiserver.go:52] "Watching apiserver" Apr 13 20:09:18.312801 kubelet[2594]: I0413 20:09:18.312751 2594 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:09:18.340059 kubelet[2594]: I0413 20:09:18.340012 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:18.342389 kubelet[2594]: I0413 20:09:18.340794 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:18.342389 kubelet[2594]: I0413 20:09:18.341198 2594 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:18.351647 kubelet[2594]: E0413 20:09:18.351600 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:18.352441 kubelet[2594]: E0413 20:09:18.352375 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:18.354541 kubelet[2594]: E0413 20:09:18.354497 2594 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-1-0f1354cb62\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:19.391299 kubelet[2594]: I0413 
20:09:19.391228 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-1-0f1354cb62" podStartSLOduration=4.391215623 podStartE2EDuration="4.391215623s" podCreationTimestamp="2026-04-13 20:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:19.390114183 +0000 UTC m=+2.154704471" watchObservedRunningTime="2026-04-13 20:09:19.391215623 +0000 UTC m=+2.155805911" Apr 13 20:09:19.409992 kubelet[2594]: I0413 20:09:19.409941 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-1-0f1354cb62" podStartSLOduration=4.409929253 podStartE2EDuration="4.409929253s" podCreationTimestamp="2026-04-13 20:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:19.402262083 +0000 UTC m=+2.166852341" watchObservedRunningTime="2026-04-13 20:09:19.409929253 +0000 UTC m=+2.174519511" Apr 13 20:09:19.411442 kubelet[2594]: I0413 20:09:19.411416 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-1-0f1354cb62" podStartSLOduration=4.411395533 podStartE2EDuration="4.411395533s" podCreationTimestamp="2026-04-13 20:09:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:19.409353963 +0000 UTC m=+2.173944221" watchObservedRunningTime="2026-04-13 20:09:19.411395533 +0000 UTC m=+2.175985801" Apr 13 20:09:23.109755 kubelet[2594]: I0413 20:09:23.109694 2594 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:09:23.111077 kubelet[2594]: I0413 20:09:23.110855 2594 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" 
newPodCIDR="192.168.0.0/24" Apr 13 20:09:23.111209 containerd[1517]: time="2026-04-13T20:09:23.110290713Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:09:24.226404 systemd[1]: Created slice kubepods-besteffort-pod6cd90924_848d_494d_952a_251a28953a65.slice - libcontainer container kubepods-besteffort-pod6cd90924_848d_494d_952a_251a28953a65.slice. Apr 13 20:09:24.259772 kubelet[2594]: I0413 20:09:24.259672 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6cd90924-848d-494d-952a-251a28953a65-xtables-lock\") pod \"kube-proxy-sm6fn\" (UID: \"6cd90924-848d-494d-952a-251a28953a65\") " pod="kube-system/kube-proxy-sm6fn" Apr 13 20:09:24.259772 kubelet[2594]: I0413 20:09:24.259726 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cd90924-848d-494d-952a-251a28953a65-lib-modules\") pod \"kube-proxy-sm6fn\" (UID: \"6cd90924-848d-494d-952a-251a28953a65\") " pod="kube-system/kube-proxy-sm6fn" Apr 13 20:09:24.259772 kubelet[2594]: I0413 20:09:24.259754 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6cd90924-848d-494d-952a-251a28953a65-kube-proxy\") pod \"kube-proxy-sm6fn\" (UID: \"6cd90924-848d-494d-952a-251a28953a65\") " pod="kube-system/kube-proxy-sm6fn" Apr 13 20:09:24.259772 kubelet[2594]: I0413 20:09:24.259775 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlf8t\" (UniqueName: \"kubernetes.io/projected/6cd90924-848d-494d-952a-251a28953a65-kube-api-access-hlf8t\") pod \"kube-proxy-sm6fn\" (UID: \"6cd90924-848d-494d-952a-251a28953a65\") " pod="kube-system/kube-proxy-sm6fn" Apr 13 20:09:24.343155 systemd[1]: Created slice 
kubepods-besteffort-podbda35074_8f5b_470f_905b_b2c148192419.slice - libcontainer container kubepods-besteffort-podbda35074_8f5b_470f_905b_b2c148192419.slice. Apr 13 20:09:24.361750 kubelet[2594]: I0413 20:09:24.360701 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bda35074-8f5b-470f-905b-b2c148192419-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-fn9gz\" (UID: \"bda35074-8f5b-470f-905b-b2c148192419\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fn9gz" Apr 13 20:09:24.361750 kubelet[2594]: I0413 20:09:24.360755 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7dlx\" (UniqueName: \"kubernetes.io/projected/bda35074-8f5b-470f-905b-b2c148192419-kube-api-access-f7dlx\") pod \"tigera-operator-6cf4cccc57-fn9gz\" (UID: \"bda35074-8f5b-470f-905b-b2c148192419\") " pod="tigera-operator/tigera-operator-6cf4cccc57-fn9gz" Apr 13 20:09:24.538182 containerd[1517]: time="2026-04-13T20:09:24.538097413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sm6fn,Uid:6cd90924-848d-494d-952a-251a28953a65,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:24.576099 containerd[1517]: time="2026-04-13T20:09:24.575953293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:24.576099 containerd[1517]: time="2026-04-13T20:09:24.576026173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:24.576099 containerd[1517]: time="2026-04-13T20:09:24.576052473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:24.576444 containerd[1517]: time="2026-04-13T20:09:24.576250403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:24.618557 systemd[1]: Started cri-containerd-f4a1e0c41e5069b5421aed714a291b0a8552603a6812846ea10bbe1cc2ae2c68.scope - libcontainer container f4a1e0c41e5069b5421aed714a291b0a8552603a6812846ea10bbe1cc2ae2c68. Apr 13 20:09:24.649191 containerd[1517]: time="2026-04-13T20:09:24.648927673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fn9gz,Uid:bda35074-8f5b-470f-905b-b2c148192419,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:09:24.650975 containerd[1517]: time="2026-04-13T20:09:24.650945513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sm6fn,Uid:6cd90924-848d-494d-952a-251a28953a65,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4a1e0c41e5069b5421aed714a291b0a8552603a6812846ea10bbe1cc2ae2c68\"" Apr 13 20:09:24.656451 containerd[1517]: time="2026-04-13T20:09:24.656373033Z" level=info msg="CreateContainer within sandbox \"f4a1e0c41e5069b5421aed714a291b0a8552603a6812846ea10bbe1cc2ae2c68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:09:24.675142 containerd[1517]: time="2026-04-13T20:09:24.674874933Z" level=info msg="CreateContainer within sandbox \"f4a1e0c41e5069b5421aed714a291b0a8552603a6812846ea10bbe1cc2ae2c68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0bea530a04832fd55e65c641531fd1491752eabb6d62954189f04b93f7fff73\"" Apr 13 20:09:24.676729 containerd[1517]: time="2026-04-13T20:09:24.676589923Z" level=info msg="StartContainer for \"d0bea530a04832fd55e65c641531fd1491752eabb6d62954189f04b93f7fff73\"" Apr 13 20:09:24.678648 containerd[1517]: time="2026-04-13T20:09:24.678541353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:24.678648 containerd[1517]: time="2026-04-13T20:09:24.678599953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:24.678648 containerd[1517]: time="2026-04-13T20:09:24.678607583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:24.679282 containerd[1517]: time="2026-04-13T20:09:24.678838903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:24.696457 systemd[1]: Started cri-containerd-f02aa4a6b8181241717df3deb0b5c6cb47e105d967f3694533c9d98bc20e9578.scope - libcontainer container f02aa4a6b8181241717df3deb0b5c6cb47e105d967f3694533c9d98bc20e9578. Apr 13 20:09:24.701774 systemd[1]: Started cri-containerd-d0bea530a04832fd55e65c641531fd1491752eabb6d62954189f04b93f7fff73.scope - libcontainer container d0bea530a04832fd55e65c641531fd1491752eabb6d62954189f04b93f7fff73. 
Apr 13 20:09:24.730041 containerd[1517]: time="2026-04-13T20:09:24.729679743Z" level=info msg="StartContainer for \"d0bea530a04832fd55e65c641531fd1491752eabb6d62954189f04b93f7fff73\" returns successfully" Apr 13 20:09:24.737419 containerd[1517]: time="2026-04-13T20:09:24.737384013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-fn9gz,Uid:bda35074-8f5b-470f-905b-b2c148192419,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f02aa4a6b8181241717df3deb0b5c6cb47e105d967f3694533c9d98bc20e9578\"" Apr 13 20:09:24.739552 containerd[1517]: time="2026-04-13T20:09:24.739379543Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:09:26.163578 kubelet[2594]: I0413 20:09:26.163462 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-sm6fn" podStartSLOduration=2.163438762 podStartE2EDuration="2.163438762s" podCreationTimestamp="2026-04-13 20:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:09:25.366769432 +0000 UTC m=+8.131359730" watchObservedRunningTime="2026-04-13 20:09:26.163438762 +0000 UTC m=+8.928029070" Apr 13 20:09:26.664071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985275783.mount: Deactivated successfully. 
Apr 13 20:09:27.417088 containerd[1517]: time="2026-04-13T20:09:27.417041682Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:27.418445 containerd[1517]: time="2026-04-13T20:09:27.418332932Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 20:09:27.419518 containerd[1517]: time="2026-04-13T20:09:27.419485042Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:27.421855 containerd[1517]: time="2026-04-13T20:09:27.421349592Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:27.421855 containerd[1517]: time="2026-04-13T20:09:27.421778032Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.682374829s" Apr 13 20:09:27.421855 containerd[1517]: time="2026-04-13T20:09:27.421797622Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 20:09:27.424971 containerd[1517]: time="2026-04-13T20:09:27.424953722Z" level=info msg="CreateContainer within sandbox \"f02aa4a6b8181241717df3deb0b5c6cb47e105d967f3694533c9d98bc20e9578\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 20:09:27.435852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003244213.mount: Deactivated successfully. 
Apr 13 20:09:27.438178 containerd[1517]: time="2026-04-13T20:09:27.438157032Z" level=info msg="CreateContainer within sandbox \"f02aa4a6b8181241717df3deb0b5c6cb47e105d967f3694533c9d98bc20e9578\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd\"" Apr 13 20:09:27.438580 containerd[1517]: time="2026-04-13T20:09:27.438552912Z" level=info msg="StartContainer for \"22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd\"" Apr 13 20:09:27.465445 systemd[1]: Started cri-containerd-22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd.scope - libcontainer container 22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd. Apr 13 20:09:27.484632 containerd[1517]: time="2026-04-13T20:09:27.484540532Z" level=info msg="StartContainer for \"22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd\" returns successfully" Apr 13 20:09:28.379849 kubelet[2594]: I0413 20:09:28.379417 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-fn9gz" podStartSLOduration=1.6956749530000002 podStartE2EDuration="4.379398922s" podCreationTimestamp="2026-04-13 20:09:24 +0000 UTC" firstStartedPulling="2026-04-13 20:09:24.738990133 +0000 UTC m=+7.503580401" lastFinishedPulling="2026-04-13 20:09:27.422714112 +0000 UTC m=+10.187304370" observedRunningTime="2026-04-13 20:09:28.378953792 +0000 UTC m=+11.143544090" watchObservedRunningTime="2026-04-13 20:09:28.379398922 +0000 UTC m=+11.143989230" Apr 13 20:09:31.639659 update_engine[1497]: I20260413 20:09:31.639473 1497 update_attempter.cc:509] Updating boot flags... 
Apr 13 20:09:31.726451 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2974) Apr 13 20:09:31.781454 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2973) Apr 13 20:09:31.833369 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2973) Apr 13 20:09:32.585666 sudo[1733]: pam_unix(sudo:session): session closed for user root Apr 13 20:09:32.617178 sshd[1730]: pam_unix(sshd:session): session closed for user core Apr 13 20:09:32.620912 systemd[1]: sshd@8-62.238.1.80:22-20.229.252.112:37104.service: Deactivated successfully. Apr 13 20:09:32.625273 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:09:32.626591 systemd[1]: session-9.scope: Consumed 2.924s CPU time, 158.8M memory peak, 0B memory swap peak. Apr 13 20:09:32.630241 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:09:32.632877 systemd-logind[1494]: Removed session 9. Apr 13 20:09:34.399465 systemd[1]: Created slice kubepods-besteffort-pod09689a2f_3d56_493e_99e1_93f86b767b10.slice - libcontainer container kubepods-besteffort-pod09689a2f_3d56_493e_99e1_93f86b767b10.slice. 
Apr 13 20:09:34.423978 kubelet[2594]: I0413 20:09:34.423646 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09689a2f-3d56-493e-99e1-93f86b767b10-tigera-ca-bundle\") pod \"calico-typha-6897689779-2tkv6\" (UID: \"09689a2f-3d56-493e-99e1-93f86b767b10\") " pod="calico-system/calico-typha-6897689779-2tkv6" Apr 13 20:09:34.423978 kubelet[2594]: I0413 20:09:34.423676 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/09689a2f-3d56-493e-99e1-93f86b767b10-typha-certs\") pod \"calico-typha-6897689779-2tkv6\" (UID: \"09689a2f-3d56-493e-99e1-93f86b767b10\") " pod="calico-system/calico-typha-6897689779-2tkv6" Apr 13 20:09:34.423978 kubelet[2594]: I0413 20:09:34.423688 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pps68\" (UniqueName: \"kubernetes.io/projected/09689a2f-3d56-493e-99e1-93f86b767b10-kube-api-access-pps68\") pod \"calico-typha-6897689779-2tkv6\" (UID: \"09689a2f-3d56-493e-99e1-93f86b767b10\") " pod="calico-system/calico-typha-6897689779-2tkv6" Apr 13 20:09:34.458692 systemd[1]: Created slice kubepods-besteffort-podd08bc880_98b3_4231_82ec_c386ec26ca83.slice - libcontainer container kubepods-besteffort-podd08bc880_98b3_4231_82ec_c386ec26ca83.slice. 
Apr 13 20:09:34.524448 kubelet[2594]: I0413 20:09:34.524407 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-cni-bin-dir\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524448 kubelet[2594]: I0413 20:09:34.524441 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-flexvol-driver-host\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524448 kubelet[2594]: I0413 20:09:34.524454 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d08bc880-98b3-4231-82ec-c386ec26ca83-node-certs\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524624 kubelet[2594]: I0413 20:09:34.524479 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-lib-modules\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524624 kubelet[2594]: I0413 20:09:34.524489 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-var-run-calico\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524624 kubelet[2594]: I0413 20:09:34.524508 2594 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpw4\" (UniqueName: \"kubernetes.io/projected/d08bc880-98b3-4231-82ec-c386ec26ca83-kube-api-access-rjpw4\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524624 kubelet[2594]: I0413 20:09:34.524521 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-bpffs\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524624 kubelet[2594]: I0413 20:09:34.524531 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-nodeproc\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524712 kubelet[2594]: I0413 20:09:34.524540 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-policysync\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524712 kubelet[2594]: I0413 20:09:34.524551 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-sys-fs\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524712 kubelet[2594]: I0413 20:09:34.524560 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d08bc880-98b3-4231-82ec-c386ec26ca83-tigera-ca-bundle\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524712 kubelet[2594]: I0413 20:09:34.524569 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-xtables-lock\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524712 kubelet[2594]: I0413 20:09:34.524585 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-var-lib-calico\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524792 kubelet[2594]: I0413 20:09:34.524596 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-cni-net-dir\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.524792 kubelet[2594]: I0413 20:09:34.524607 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d08bc880-98b3-4231-82ec-c386ec26ca83-cni-log-dir\") pod \"calico-node-xtx8w\" (UID: \"d08bc880-98b3-4231-82ec-c386ec26ca83\") " pod="calico-system/calico-node-xtx8w" Apr 13 20:09:34.581879 kubelet[2594]: E0413 20:09:34.581720 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:34.624972 kubelet[2594]: I0413 20:09:34.624847 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c44a049a-7045-454f-8fa8-94f080c00249-socket-dir\") pod \"csi-node-driver-lb95t\" (UID: \"c44a049a-7045-454f-8fa8-94f080c00249\") " pod="calico-system/csi-node-driver-lb95t" Apr 13 20:09:34.624972 kubelet[2594]: I0413 20:09:34.624888 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c44a049a-7045-454f-8fa8-94f080c00249-varrun\") pod \"csi-node-driver-lb95t\" (UID: \"c44a049a-7045-454f-8fa8-94f080c00249\") " pod="calico-system/csi-node-driver-lb95t" Apr 13 20:09:34.624972 kubelet[2594]: I0413 20:09:34.624906 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c44a049a-7045-454f-8fa8-94f080c00249-kubelet-dir\") pod \"csi-node-driver-lb95t\" (UID: \"c44a049a-7045-454f-8fa8-94f080c00249\") " pod="calico-system/csi-node-driver-lb95t" Apr 13 20:09:34.624972 kubelet[2594]: I0413 20:09:34.624917 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf4nv\" (UniqueName: \"kubernetes.io/projected/c44a049a-7045-454f-8fa8-94f080c00249-kube-api-access-nf4nv\") pod \"csi-node-driver-lb95t\" (UID: \"c44a049a-7045-454f-8fa8-94f080c00249\") " pod="calico-system/csi-node-driver-lb95t" Apr 13 20:09:34.624972 kubelet[2594]: I0413 20:09:34.624928 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c44a049a-7045-454f-8fa8-94f080c00249-registration-dir\") pod 
\"csi-node-driver-lb95t\" (UID: \"c44a049a-7045-454f-8fa8-94f080c00249\") " pod="calico-system/csi-node-driver-lb95t" Apr 13 20:09:34.632460 kubelet[2594]: E0413 20:09:34.632439 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.632636 kubelet[2594]: W0413 20:09:34.632520 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.632636 kubelet[2594]: E0413 20:09:34.632538 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.637620 kubelet[2594]: E0413 20:09:34.637594 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.637620 kubelet[2594]: W0413 20:09:34.637612 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.637717 kubelet[2594]: E0413 20:09:34.637628 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.705932 containerd[1517]: time="2026-04-13T20:09:34.705822572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6897689779-2tkv6,Uid:09689a2f-3d56-493e-99e1-93f86b767b10,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:34.726514 kubelet[2594]: E0413 20:09:34.726446 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.726514 kubelet[2594]: W0413 20:09:34.726465 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.726514 kubelet[2594]: E0413 20:09:34.726483 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.727293 kubelet[2594]: E0413 20:09:34.727164 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.727293 kubelet[2594]: W0413 20:09:34.727176 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.727293 kubelet[2594]: E0413 20:09:34.727183 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.727673 kubelet[2594]: E0413 20:09:34.727446 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.727673 kubelet[2594]: W0413 20:09:34.727452 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.727673 kubelet[2594]: E0413 20:09:34.727459 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.728071 kubelet[2594]: E0413 20:09:34.728025 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.728071 kubelet[2594]: W0413 20:09:34.728039 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.728071 kubelet[2594]: E0413 20:09:34.728046 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.728562 kubelet[2594]: E0413 20:09:34.728521 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.728562 kubelet[2594]: W0413 20:09:34.728533 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.728562 kubelet[2594]: E0413 20:09:34.728541 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.729006 kubelet[2594]: E0413 20:09:34.728959 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.729006 kubelet[2594]: W0413 20:09:34.728972 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.729006 kubelet[2594]: E0413 20:09:34.728979 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.729529 kubelet[2594]: E0413 20:09:34.729476 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.729529 kubelet[2594]: W0413 20:09:34.729489 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.729529 kubelet[2594]: E0413 20:09:34.729496 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.729739 kubelet[2594]: E0413 20:09:34.729693 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.729739 kubelet[2594]: W0413 20:09:34.729705 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.729739 kubelet[2594]: E0413 20:09:34.729711 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.730125 kubelet[2594]: E0413 20:09:34.730094 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.730125 kubelet[2594]: W0413 20:09:34.730108 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.730125 kubelet[2594]: E0413 20:09:34.730115 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.730766 kubelet[2594]: E0413 20:09:34.730738 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.730766 kubelet[2594]: W0413 20:09:34.730751 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.730766 kubelet[2594]: E0413 20:09:34.730759 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.731257 kubelet[2594]: E0413 20:09:34.731236 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.731257 kubelet[2594]: W0413 20:09:34.731249 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.731257 kubelet[2594]: E0413 20:09:34.731257 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.731765 kubelet[2594]: E0413 20:09:34.731739 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.731765 kubelet[2594]: W0413 20:09:34.731751 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.731765 kubelet[2594]: E0413 20:09:34.731758 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.732241 kubelet[2594]: E0413 20:09:34.732215 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.732241 kubelet[2594]: W0413 20:09:34.732227 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.732241 kubelet[2594]: E0413 20:09:34.732235 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.733059 kubelet[2594]: E0413 20:09:34.732871 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.733059 kubelet[2594]: W0413 20:09:34.732897 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.733059 kubelet[2594]: E0413 20:09:34.732949 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.733311 containerd[1517]: time="2026-04-13T20:09:34.733141162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:34.733679 containerd[1517]: time="2026-04-13T20:09:34.733448982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:34.733679 containerd[1517]: time="2026-04-13T20:09:34.733515732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:34.734012 kubelet[2594]: E0413 20:09:34.733995 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.734180 kubelet[2594]: W0413 20:09:34.734083 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.734180 kubelet[2594]: E0413 20:09:34.734101 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.734821 containerd[1517]: time="2026-04-13T20:09:34.734585322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:34.735849 kubelet[2594]: E0413 20:09:34.735677 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.735849 kubelet[2594]: W0413 20:09:34.735695 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.735849 kubelet[2594]: E0413 20:09:34.735711 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.736059 kubelet[2594]: E0413 20:09:34.736044 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.736122 kubelet[2594]: W0413 20:09:34.736109 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.736699 kubelet[2594]: E0413 20:09:34.736156 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.736699 kubelet[2594]: E0413 20:09:34.736359 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.736699 kubelet[2594]: W0413 20:09:34.736365 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.736699 kubelet[2594]: E0413 20:09:34.736371 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.736699 kubelet[2594]: E0413 20:09:34.736567 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.736699 kubelet[2594]: W0413 20:09:34.736573 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.736699 kubelet[2594]: E0413 20:09:34.736579 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.736867 kubelet[2594]: E0413 20:09:34.736859 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.737000 kubelet[2594]: W0413 20:09:34.736892 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.737000 kubelet[2594]: E0413 20:09:34.736901 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.737161 kubelet[2594]: E0413 20:09:34.737153 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.737199 kubelet[2594]: W0413 20:09:34.737192 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.737233 kubelet[2594]: E0413 20:09:34.737226 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.738263 kubelet[2594]: E0413 20:09:34.738250 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.738343 kubelet[2594]: W0413 20:09:34.738334 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.738383 kubelet[2594]: E0413 20:09:34.738375 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.738685 kubelet[2594]: E0413 20:09:34.738675 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.738724 kubelet[2594]: W0413 20:09:34.738717 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.738770 kubelet[2594]: E0413 20:09:34.738762 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.739711 kubelet[2594]: E0413 20:09:34.739700 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.739767 kubelet[2594]: W0413 20:09:34.739760 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.740422 kubelet[2594]: E0413 20:09:34.739889 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:34.740754 kubelet[2594]: E0413 20:09:34.740707 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.740965 kubelet[2594]: W0413 20:09:34.740908 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.741221 kubelet[2594]: E0413 20:09:34.741103 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.747555 kubelet[2594]: E0413 20:09:34.747477 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:34.747555 kubelet[2594]: W0413 20:09:34.747490 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:34.747555 kubelet[2594]: E0413 20:09:34.747530 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:34.755438 systemd[1]: Started cri-containerd-536499c9ad8d760287261e33173569fef0b4baa88a37575ef97862624d91ddc4.scope - libcontainer container 536499c9ad8d760287261e33173569fef0b4baa88a37575ef97862624d91ddc4. 
Apr 13 20:09:34.765399 containerd[1517]: time="2026-04-13T20:09:34.765160262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xtx8w,Uid:d08bc880-98b3-4231-82ec-c386ec26ca83,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:34.788531 containerd[1517]: time="2026-04-13T20:09:34.788185642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6897689779-2tkv6,Uid:09689a2f-3d56-493e-99e1-93f86b767b10,Namespace:calico-system,Attempt:0,} returns sandbox id \"536499c9ad8d760287261e33173569fef0b4baa88a37575ef97862624d91ddc4\"" Apr 13 20:09:34.791531 containerd[1517]: time="2026-04-13T20:09:34.790758212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:34.791531 containerd[1517]: time="2026-04-13T20:09:34.790854442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:34.791531 containerd[1517]: time="2026-04-13T20:09:34.790864932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:34.791531 containerd[1517]: time="2026-04-13T20:09:34.790952972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:34.792872 containerd[1517]: time="2026-04-13T20:09:34.792757202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:09:34.812448 systemd[1]: Started cri-containerd-cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976.scope - libcontainer container cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976. 
Apr 13 20:09:34.833200 containerd[1517]: time="2026-04-13T20:09:34.833166582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xtx8w,Uid:d08bc880-98b3-4231-82ec-c386ec26ca83,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\""
Apr 13 20:09:36.219997 kubelet[2594]: E0413 20:09:36.219944 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:09:36.219997 kubelet[2594]: W0413 20:09:36.219978 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:09:36.220787 kubelet[2594]: E0413 20:09:36.220006 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:09:36.317492 kubelet[2594]: E0413 20:09:36.316968 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249"
Apr 13 20:09:36.856784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121188137.mount: Deactivated successfully.
Apr 13 20:09:37.656724 containerd[1517]: time="2026-04-13T20:09:37.656669661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:37.657720 containerd[1517]: time="2026-04-13T20:09:37.657673581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 13 20:09:37.658658 containerd[1517]: time="2026-04-13T20:09:37.658610369Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:37.660720 containerd[1517]: time="2026-04-13T20:09:37.660589669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:09:37.661067 containerd[1517]: time="2026-04-13T20:09:37.661032597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.868227744s"
Apr 13 20:09:37.661108 containerd[1517]: time="2026-04-13T20:09:37.661068388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 13 20:09:37.662822 containerd[1517]: time="2026-04-13T20:09:37.662436693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 13 20:09:37.674024 containerd[1517]: time="2026-04-13T20:09:37.673980868Z" level=info msg="CreateContainer within sandbox \"536499c9ad8d760287261e33173569fef0b4baa88a37575ef97862624d91ddc4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 13 20:09:37.685691 containerd[1517]: time="2026-04-13T20:09:37.685665359Z" level=info msg="CreateContainer within sandbox \"536499c9ad8d760287261e33173569fef0b4baa88a37575ef97862624d91ddc4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"04ec0c23ead6c271954ee3281803157112725ba69d9258eaa3c3b679ad2e2a37\""
Apr 13 20:09:37.686026 containerd[1517]: time="2026-04-13T20:09:37.685931930Z" level=info msg="StartContainer for \"04ec0c23ead6c271954ee3281803157112725ba69d9258eaa3c3b679ad2e2a37\""
Apr 13 20:09:37.721452 systemd[1]: Started cri-containerd-04ec0c23ead6c271954ee3281803157112725ba69d9258eaa3c3b679ad2e2a37.scope - libcontainer container 04ec0c23ead6c271954ee3281803157112725ba69d9258eaa3c3b679ad2e2a37.
Apr 13 20:09:37.753580 containerd[1517]: time="2026-04-13T20:09:37.753543134Z" level=info msg="StartContainer for \"04ec0c23ead6c271954ee3281803157112725ba69d9258eaa3c3b679ad2e2a37\" returns successfully"
Apr 13 20:09:38.317921 kubelet[2594]: E0413 20:09:38.316843 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249"
Apr 13 20:09:38.447028 kubelet[2594]: E0413 20:09:38.446548 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:09:38.447028 kubelet[2594]: W0413 20:09:38.446578 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:09:38.447028 kubelet[2594]: E0413 20:09:38.446604 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:09:38.469523 kubelet[2594]: E0413 20:09:38.469386 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:09:38.469523 kubelet[2594]: W0413 20:09:38.469398 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:09:38.469523 kubelet[2594]: E0413 20:09:38.469409 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:38.469684 kubelet[2594]: E0413 20:09:38.469673 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:38.470456 kubelet[2594]: W0413 20:09:38.469730 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:38.470456 kubelet[2594]: E0413 20:09:38.469743 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:38.470592 kubelet[2594]: E0413 20:09:38.470580 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:38.470691 kubelet[2594]: W0413 20:09:38.470629 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:38.470691 kubelet[2594]: E0413 20:09:38.470651 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.395988 kubelet[2594]: I0413 20:09:39.395885 2594 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:09:39.460836 kubelet[2594]: E0413 20:09:39.460777 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.460836 kubelet[2594]: W0413 20:09:39.460815 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.460836 kubelet[2594]: E0413 20:09:39.460845 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.461432 kubelet[2594]: E0413 20:09:39.461382 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.461432 kubelet[2594]: W0413 20:09:39.461415 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.461598 kubelet[2594]: E0413 20:09:39.461443 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.462059 kubelet[2594]: E0413 20:09:39.462004 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.462059 kubelet[2594]: W0413 20:09:39.462033 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.462059 kubelet[2594]: E0413 20:09:39.462050 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.462655 kubelet[2594]: E0413 20:09:39.462611 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.462655 kubelet[2594]: W0413 20:09:39.462643 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.462839 kubelet[2594]: E0413 20:09:39.462666 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.463290 kubelet[2594]: E0413 20:09:39.463244 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.463290 kubelet[2594]: W0413 20:09:39.463275 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.463472 kubelet[2594]: E0413 20:09:39.463299 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.464103 kubelet[2594]: E0413 20:09:39.463904 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.464103 kubelet[2594]: W0413 20:09:39.463928 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.464103 kubelet[2594]: E0413 20:09:39.463948 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.464830 kubelet[2594]: E0413 20:09:39.464508 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.464830 kubelet[2594]: W0413 20:09:39.464536 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.464830 kubelet[2594]: E0413 20:09:39.464554 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.465434 kubelet[2594]: E0413 20:09:39.465162 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.465434 kubelet[2594]: W0413 20:09:39.465188 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.465434 kubelet[2594]: E0413 20:09:39.465211 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.465983 kubelet[2594]: E0413 20:09:39.465938 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.465983 kubelet[2594]: W0413 20:09:39.465967 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.465983 kubelet[2594]: E0413 20:09:39.465984 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.466635 kubelet[2594]: E0413 20:09:39.466592 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.466635 kubelet[2594]: W0413 20:09:39.466629 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.466810 kubelet[2594]: E0413 20:09:39.466653 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.467363 kubelet[2594]: E0413 20:09:39.467210 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.467363 kubelet[2594]: W0413 20:09:39.467227 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.467363 kubelet[2594]: E0413 20:09:39.467245 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.467835 kubelet[2594]: E0413 20:09:39.467795 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.467835 kubelet[2594]: W0413 20:09:39.467821 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.467835 kubelet[2594]: E0413 20:09:39.467839 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.468552 kubelet[2594]: E0413 20:09:39.468508 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.468552 kubelet[2594]: W0413 20:09:39.468540 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.468859 kubelet[2594]: E0413 20:09:39.468559 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.469350 kubelet[2594]: E0413 20:09:39.469091 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.469350 kubelet[2594]: W0413 20:09:39.469112 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.469350 kubelet[2594]: E0413 20:09:39.469129 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.469687 kubelet[2594]: E0413 20:09:39.469652 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.469687 kubelet[2594]: W0413 20:09:39.469670 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.469687 kubelet[2594]: E0413 20:09:39.469687 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.470256 kubelet[2594]: E0413 20:09:39.470197 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.470256 kubelet[2594]: W0413 20:09:39.470215 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.470256 kubelet[2594]: E0413 20:09:39.470230 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.470870 kubelet[2594]: E0413 20:09:39.470837 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.470870 kubelet[2594]: W0413 20:09:39.470858 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.471170 kubelet[2594]: E0413 20:09:39.470873 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.473444 kubelet[2594]: E0413 20:09:39.473141 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.473444 kubelet[2594]: W0413 20:09:39.473170 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.473444 kubelet[2594]: E0413 20:09:39.473195 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.474399 kubelet[2594]: E0413 20:09:39.474107 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.474399 kubelet[2594]: W0413 20:09:39.474135 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.474399 kubelet[2594]: E0413 20:09:39.474157 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.475231 kubelet[2594]: E0413 20:09:39.474980 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.475231 kubelet[2594]: W0413 20:09:39.475002 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.475231 kubelet[2594]: E0413 20:09:39.475024 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.475831 kubelet[2594]: E0413 20:09:39.475789 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.475831 kubelet[2594]: W0413 20:09:39.475819 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.475962 kubelet[2594]: E0413 20:09:39.475842 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.476603 kubelet[2594]: E0413 20:09:39.476558 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.476603 kubelet[2594]: W0413 20:09:39.476589 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.476759 kubelet[2594]: E0413 20:09:39.476612 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.477450 kubelet[2594]: E0413 20:09:39.477403 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.477450 kubelet[2594]: W0413 20:09:39.477435 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.477587 kubelet[2594]: E0413 20:09:39.477460 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.478066 kubelet[2594]: E0413 20:09:39.478028 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.478066 kubelet[2594]: W0413 20:09:39.478053 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.478182 kubelet[2594]: E0413 20:09:39.478071 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.478647 kubelet[2594]: E0413 20:09:39.478609 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.478647 kubelet[2594]: W0413 20:09:39.478636 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.478847 kubelet[2594]: E0413 20:09:39.478658 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.479203 kubelet[2594]: E0413 20:09:39.479157 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.479203 kubelet[2594]: W0413 20:09:39.479186 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.479203 kubelet[2594]: E0413 20:09:39.479205 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.479865 kubelet[2594]: E0413 20:09:39.479823 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.479865 kubelet[2594]: W0413 20:09:39.479854 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.479970 kubelet[2594]: E0413 20:09:39.479872 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.480497 kubelet[2594]: E0413 20:09:39.480462 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.480497 kubelet[2594]: W0413 20:09:39.480487 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.480597 kubelet[2594]: E0413 20:09:39.480503 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.481412 kubelet[2594]: E0413 20:09:39.481379 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.481412 kubelet[2594]: W0413 20:09:39.481402 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.481555 kubelet[2594]: E0413 20:09:39.481422 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.481942 kubelet[2594]: E0413 20:09:39.481902 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.481942 kubelet[2594]: W0413 20:09:39.481929 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.482067 kubelet[2594]: E0413 20:09:39.481948 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.482534 kubelet[2594]: E0413 20:09:39.482498 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.482534 kubelet[2594]: W0413 20:09:39.482519 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.482654 kubelet[2594]: E0413 20:09:39.482538 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.483947 kubelet[2594]: E0413 20:09:39.483906 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.483947 kubelet[2594]: W0413 20:09:39.483939 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.484076 kubelet[2594]: E0413 20:09:39.483961 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:09:39.484526 kubelet[2594]: E0413 20:09:39.484492 2594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:09:39.484526 kubelet[2594]: W0413 20:09:39.484515 2594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:09:39.484612 kubelet[2594]: E0413 20:09:39.484532 2594 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:09:39.750452 containerd[1517]: time="2026-04-13T20:09:39.750409569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.751574 containerd[1517]: time="2026-04-13T20:09:39.751484747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:09:39.752753 containerd[1517]: time="2026-04-13T20:09:39.752541694Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.754446 containerd[1517]: time="2026-04-13T20:09:39.754426651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:39.754887 containerd[1517]: time="2026-04-13T20:09:39.754855306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.092400012s" Apr 13 20:09:39.754918 containerd[1517]: time="2026-04-13T20:09:39.754892958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:09:39.758394 containerd[1517]: time="2026-04-13T20:09:39.758370191Z" level=info msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:09:39.772488 containerd[1517]: time="2026-04-13T20:09:39.772462240Z" level=info msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690\"" Apr 13 20:09:39.773543 containerd[1517]: time="2026-04-13T20:09:39.773518427Z" level=info msg="StartContainer for \"7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690\"" Apr 13 20:09:39.800046 systemd[1]: run-containerd-runc-k8s.io-7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690-runc.rMW72b.mount: Deactivated successfully. Apr 13 20:09:39.810428 systemd[1]: Started cri-containerd-7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690.scope - libcontainer container 7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690. 
Apr 13 20:09:39.831485 containerd[1517]: time="2026-04-13T20:09:39.831449579Z" level=info msg="StartContainer for \"7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690\" returns successfully" Apr 13 20:09:39.842819 systemd[1]: cri-containerd-7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690.scope: Deactivated successfully. Apr 13 20:09:39.860444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690-rootfs.mount: Deactivated successfully. Apr 13 20:09:39.924724 containerd[1517]: time="2026-04-13T20:09:39.924534016Z" level=info msg="shim disconnected" id=7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690 namespace=k8s.io Apr 13 20:09:39.924724 containerd[1517]: time="2026-04-13T20:09:39.924638140Z" level=warning msg="cleaning up after shim disconnected" id=7c5a6054475919f12066fb645ba6514aa5dfa23b78f5061ee9245688ae756690 namespace=k8s.io Apr 13 20:09:39.924724 containerd[1517]: time="2026-04-13T20:09:39.924652160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:09:40.317448 kubelet[2594]: E0413 20:09:40.317381 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:40.403575 containerd[1517]: time="2026-04-13T20:09:40.403194379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:09:40.429958 kubelet[2594]: I0413 20:09:40.429046 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-6897689779-2tkv6" podStartSLOduration=3.55813172 podStartE2EDuration="6.429029737s" podCreationTimestamp="2026-04-13 20:09:34 +0000 UTC" firstStartedPulling="2026-04-13 20:09:34.791184392 +0000 UTC 
m=+17.555774650" lastFinishedPulling="2026-04-13 20:09:37.662082409 +0000 UTC m=+20.426672667" observedRunningTime="2026-04-13 20:09:38.408412548 +0000 UTC m=+21.173002806" watchObservedRunningTime="2026-04-13 20:09:40.429029737 +0000 UTC m=+23.193620035" Apr 13 20:09:42.318034 kubelet[2594]: E0413 20:09:42.317924 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:44.317112 kubelet[2594]: E0413 20:09:44.317013 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:46.317798 kubelet[2594]: E0413 20:09:46.317487 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:46.923854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068791725.mount: Deactivated successfully. 
Apr 13 20:09:46.954223 containerd[1517]: time="2026-04-13T20:09:46.954160458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:46.955150 containerd[1517]: time="2026-04-13T20:09:46.954987127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:09:46.957006 containerd[1517]: time="2026-04-13T20:09:46.956196564Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:46.958087 containerd[1517]: time="2026-04-13T20:09:46.958041865Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:46.958988 containerd[1517]: time="2026-04-13T20:09:46.958450965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.555205775s" Apr 13 20:09:46.958988 containerd[1517]: time="2026-04-13T20:09:46.958483715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:09:46.962577 containerd[1517]: time="2026-04-13T20:09:46.962551027Z" level=info msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:09:46.978252 containerd[1517]: time="2026-04-13T20:09:46.978198340Z" level=info 
msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3\"" Apr 13 20:09:46.979925 containerd[1517]: time="2026-04-13T20:09:46.978768283Z" level=info msg="StartContainer for \"14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3\"" Apr 13 20:09:47.012514 systemd[1]: Started cri-containerd-14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3.scope - libcontainer container 14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3. Apr 13 20:09:47.037499 containerd[1517]: time="2026-04-13T20:09:47.037473694Z" level=info msg="StartContainer for \"14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3\" returns successfully" Apr 13 20:09:47.076637 systemd[1]: cri-containerd-14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3.scope: Deactivated successfully. Apr 13 20:09:47.172553 containerd[1517]: time="2026-04-13T20:09:47.172361764Z" level=info msg="shim disconnected" id=14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3 namespace=k8s.io Apr 13 20:09:47.172553 containerd[1517]: time="2026-04-13T20:09:47.172407875Z" level=warning msg="cleaning up after shim disconnected" id=14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3 namespace=k8s.io Apr 13 20:09:47.172553 containerd[1517]: time="2026-04-13T20:09:47.172415956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:09:47.419618 containerd[1517]: time="2026-04-13T20:09:47.419476667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:09:47.927057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14996d775502609e02255aba45981aa558014c78f0b8f6627a5e26e20721acd3-rootfs.mount: Deactivated successfully. 
Apr 13 20:09:48.093391 kubelet[2594]: I0413 20:09:48.092605 2594 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:09:48.317818 kubelet[2594]: E0413 20:09:48.317711 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:50.317777 kubelet[2594]: E0413 20:09:50.317201 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:51.013278 containerd[1517]: time="2026-04-13T20:09:51.013230297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:51.014562 containerd[1517]: time="2026-04-13T20:09:51.014423807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:09:51.015751 containerd[1517]: time="2026-04-13T20:09:51.015512135Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:51.017460 containerd[1517]: time="2026-04-13T20:09:51.017424096Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:51.018063 containerd[1517]: time="2026-04-13T20:09:51.017906354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with 
image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.598361144s" Apr 13 20:09:51.018063 containerd[1517]: time="2026-04-13T20:09:51.017931364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:09:51.022061 containerd[1517]: time="2026-04-13T20:09:51.022021971Z" level=info msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:09:51.038004 containerd[1517]: time="2026-04-13T20:09:51.037958151Z" level=info msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221\"" Apr 13 20:09:51.039361 containerd[1517]: time="2026-04-13T20:09:51.038442789Z" level=info msg="StartContainer for \"4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221\"" Apr 13 20:09:51.068452 systemd[1]: Started cri-containerd-4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221.scope - libcontainer container 4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221. 
Apr 13 20:09:51.100658 containerd[1517]: time="2026-04-13T20:09:51.100499852Z" level=info msg="StartContainer for \"4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221\" returns successfully" Apr 13 20:09:51.477138 containerd[1517]: time="2026-04-13T20:09:51.476988239Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:09:51.479944 systemd[1]: cri-containerd-4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221.scope: Deactivated successfully. Apr 13 20:09:51.497107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221-rootfs.mount: Deactivated successfully. Apr 13 20:09:51.499826 containerd[1517]: time="2026-04-13T20:09:51.499774971Z" level=info msg="shim disconnected" id=4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221 namespace=k8s.io Apr 13 20:09:51.499826 containerd[1517]: time="2026-04-13T20:09:51.499822461Z" level=warning msg="cleaning up after shim disconnected" id=4cc676507a7447f4430f1b2616a497a716abd3902c625a5af7e2817682982221 namespace=k8s.io Apr 13 20:09:51.500013 containerd[1517]: time="2026-04-13T20:09:51.499829412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:09:51.552405 kubelet[2594]: I0413 20:09:51.552373 2594 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 13 20:09:51.586502 systemd[1]: Created slice kubepods-burstable-pod5854f860_cb7b_4fb3_a3bb_bd6d5b5ffbe3.slice - libcontainer container kubepods-burstable-pod5854f860_cb7b_4fb3_a3bb_bd6d5b5ffbe3.slice. Apr 13 20:09:51.595148 systemd[1]: Created slice kubepods-besteffort-podc2f871d5_f5b7_4d50_817f_a1527db6a36c.slice - libcontainer container kubepods-besteffort-podc2f871d5_f5b7_4d50_817f_a1527db6a36c.slice. 
Apr 13 20:09:51.601950 systemd[1]: Created slice kubepods-besteffort-podd28dedcd_57de_44c6_aaf6_ca79c2dd6518.slice - libcontainer container kubepods-besteffort-podd28dedcd_57de_44c6_aaf6_ca79c2dd6518.slice. Apr 13 20:09:51.613984 systemd[1]: Created slice kubepods-besteffort-pod06d5d9e8_07e3_4ab0_a898_9bb864910c9c.slice - libcontainer container kubepods-besteffort-pod06d5d9e8_07e3_4ab0_a898_9bb864910c9c.slice. Apr 13 20:09:51.621494 systemd[1]: Created slice kubepods-besteffort-podf935bdd7_a5d9_4d40_8315_4896038786b4.slice - libcontainer container kubepods-besteffort-podf935bdd7_a5d9_4d40_8315_4896038786b4.slice. Apr 13 20:09:51.628800 systemd[1]: Created slice kubepods-burstable-pod7f7c1fad_6f07_437d_826f_867809010e65.slice - libcontainer container kubepods-burstable-pod7f7c1fad_6f07_437d_826f_867809010e65.slice. Apr 13 20:09:51.634535 systemd[1]: Created slice kubepods-besteffort-podac22c161_aa7d_4a94_a2de_6ea0122df0fc.slice - libcontainer container kubepods-besteffort-podac22c161_aa7d_4a94_a2de_6ea0122df0fc.slice. 
Apr 13 20:09:51.661587 kubelet[2594]: I0413 20:09:51.661557 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-nginx-config\") pod \"whisker-68455c66c8-cn7ch\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " pod="calico-system/whisker-68455c66c8-cn7ch" Apr 13 20:09:51.661754 kubelet[2594]: I0413 20:09:51.661743 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ac22c161-aa7d-4a94-a2de-6ea0122df0fc-goldmane-key-pair\") pod \"goldmane-9f7667bb8-t47rf\" (UID: \"ac22c161-aa7d-4a94-a2de-6ea0122df0fc\") " pod="calico-system/goldmane-9f7667bb8-t47rf" Apr 13 20:09:51.661812 kubelet[2594]: I0413 20:09:51.661804 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3-config-volume\") pod \"coredns-7d764666f9-ljtsp\" (UID: \"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3\") " pod="kube-system/coredns-7d764666f9-ljtsp" Apr 13 20:09:51.661864 kubelet[2594]: I0413 20:09:51.661846 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-ca-bundle\") pod \"whisker-68455c66c8-cn7ch\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " pod="calico-system/whisker-68455c66c8-cn7ch" Apr 13 20:09:51.661864 kubelet[2594]: I0413 20:09:51.661861 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f935bdd7-a5d9-4d40-8315-4896038786b4-calico-apiserver-certs\") pod \"calico-apiserver-799d497f46-xzwkq\" (UID: \"f935bdd7-a5d9-4d40-8315-4896038786b4\") " 
pod="calico-system/calico-apiserver-799d497f46-xzwkq" Apr 13 20:09:51.661947 kubelet[2594]: I0413 20:09:51.661874 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64rpl\" (UniqueName: \"kubernetes.io/projected/d28dedcd-57de-44c6-aaf6-ca79c2dd6518-kube-api-access-64rpl\") pod \"calico-kube-controllers-76fdf6b58f-sfc6f\" (UID: \"d28dedcd-57de-44c6-aaf6-ca79c2dd6518\") " pod="calico-system/calico-kube-controllers-76fdf6b58f-sfc6f" Apr 13 20:09:51.661947 kubelet[2594]: I0413 20:09:51.661886 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-backend-key-pair\") pod \"whisker-68455c66c8-cn7ch\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " pod="calico-system/whisker-68455c66c8-cn7ch" Apr 13 20:09:51.661947 kubelet[2594]: I0413 20:09:51.661896 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c2f871d5-f5b7-4d50-817f-a1527db6a36c-calico-apiserver-certs\") pod \"calico-apiserver-799d497f46-qlmsr\" (UID: \"c2f871d5-f5b7-4d50-817f-a1527db6a36c\") " pod="calico-system/calico-apiserver-799d497f46-qlmsr" Apr 13 20:09:51.661947 kubelet[2594]: I0413 20:09:51.661906 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mf26\" (UniqueName: \"kubernetes.io/projected/ac22c161-aa7d-4a94-a2de-6ea0122df0fc-kube-api-access-7mf26\") pod \"goldmane-9f7667bb8-t47rf\" (UID: \"ac22c161-aa7d-4a94-a2de-6ea0122df0fc\") " pod="calico-system/goldmane-9f7667bb8-t47rf" Apr 13 20:09:51.661947 kubelet[2594]: I0413 20:09:51.661918 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmm4c\" (UniqueName: 
\"kubernetes.io/projected/5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3-kube-api-access-nmm4c\") pod \"coredns-7d764666f9-ljtsp\" (UID: \"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3\") " pod="kube-system/coredns-7d764666f9-ljtsp" Apr 13 20:09:51.662076 kubelet[2594]: I0413 20:09:51.661928 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmgn2\" (UniqueName: \"kubernetes.io/projected/f935bdd7-a5d9-4d40-8315-4896038786b4-kube-api-access-cmgn2\") pod \"calico-apiserver-799d497f46-xzwkq\" (UID: \"f935bdd7-a5d9-4d40-8315-4896038786b4\") " pod="calico-system/calico-apiserver-799d497f46-xzwkq" Apr 13 20:09:51.662076 kubelet[2594]: I0413 20:09:51.661941 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d28dedcd-57de-44c6-aaf6-ca79c2dd6518-tigera-ca-bundle\") pod \"calico-kube-controllers-76fdf6b58f-sfc6f\" (UID: \"d28dedcd-57de-44c6-aaf6-ca79c2dd6518\") " pod="calico-system/calico-kube-controllers-76fdf6b58f-sfc6f" Apr 13 20:09:51.662076 kubelet[2594]: I0413 20:09:51.661951 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f684x\" (UniqueName: \"kubernetes.io/projected/7f7c1fad-6f07-437d-826f-867809010e65-kube-api-access-f684x\") pod \"coredns-7d764666f9-rk9zl\" (UID: \"7f7c1fad-6f07-437d-826f-867809010e65\") " pod="kube-system/coredns-7d764666f9-rk9zl" Apr 13 20:09:51.662076 kubelet[2594]: I0413 20:09:51.661962 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzr95\" (UniqueName: \"kubernetes.io/projected/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-kube-api-access-mzr95\") pod \"whisker-68455c66c8-cn7ch\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " pod="calico-system/whisker-68455c66c8-cn7ch" Apr 13 20:09:51.662076 kubelet[2594]: I0413 20:09:51.661972 2594 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxncv\" (UniqueName: \"kubernetes.io/projected/c2f871d5-f5b7-4d50-817f-a1527db6a36c-kube-api-access-rxncv\") pod \"calico-apiserver-799d497f46-qlmsr\" (UID: \"c2f871d5-f5b7-4d50-817f-a1527db6a36c\") " pod="calico-system/calico-apiserver-799d497f46-qlmsr" Apr 13 20:09:51.662165 kubelet[2594]: I0413 20:09:51.661981 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f7c1fad-6f07-437d-826f-867809010e65-config-volume\") pod \"coredns-7d764666f9-rk9zl\" (UID: \"7f7c1fad-6f07-437d-826f-867809010e65\") " pod="kube-system/coredns-7d764666f9-rk9zl" Apr 13 20:09:51.662165 kubelet[2594]: I0413 20:09:51.661991 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac22c161-aa7d-4a94-a2de-6ea0122df0fc-config\") pod \"goldmane-9f7667bb8-t47rf\" (UID: \"ac22c161-aa7d-4a94-a2de-6ea0122df0fc\") " pod="calico-system/goldmane-9f7667bb8-t47rf" Apr 13 20:09:51.662165 kubelet[2594]: I0413 20:09:51.662001 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac22c161-aa7d-4a94-a2de-6ea0122df0fc-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-t47rf\" (UID: \"ac22c161-aa7d-4a94-a2de-6ea0122df0fc\") " pod="calico-system/goldmane-9f7667bb8-t47rf" Apr 13 20:09:51.897472 containerd[1517]: time="2026-04-13T20:09:51.897409242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ljtsp,Uid:5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:51.901949 containerd[1517]: time="2026-04-13T20:09:51.901906656Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-799d497f46-qlmsr,Uid:c2f871d5-f5b7-4d50-817f-a1527db6a36c,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.911112 containerd[1517]: time="2026-04-13T20:09:51.910983674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fdf6b58f-sfc6f,Uid:d28dedcd-57de-44c6-aaf6-ca79c2dd6518,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.921778 containerd[1517]: time="2026-04-13T20:09:51.921447175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68455c66c8-cn7ch,Uid:06d5d9e8-07e3-4ab0-a898-9bb864910c9c,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.927984 containerd[1517]: time="2026-04-13T20:09:51.927923301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799d497f46-xzwkq,Uid:f935bdd7-a5d9-4d40-8315-4896038786b4,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:51.936708 containerd[1517]: time="2026-04-13T20:09:51.936672793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rk9zl,Uid:7f7c1fad-6f07-437d-826f-867809010e65,Namespace:kube-system,Attempt:0,}" Apr 13 20:09:51.940511 containerd[1517]: time="2026-04-13T20:09:51.940426505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t47rf,Uid:ac22c161-aa7d-4a94-a2de-6ea0122df0fc,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:52.108376 containerd[1517]: time="2026-04-13T20:09:52.107408862Z" level=error msg="Failed to destroy network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.108376 containerd[1517]: time="2026-04-13T20:09:52.107424672Z" level=error msg="Failed to destroy network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.114018 containerd[1517]: time="2026-04-13T20:09:52.107837828Z" level=error msg="encountered an error cleaning up failed sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.114018 containerd[1517]: time="2026-04-13T20:09:52.109396262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fdf6b58f-sfc6f,Uid:d28dedcd-57de-44c6-aaf6-ca79c2dd6518,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.114018 containerd[1517]: time="2026-04-13T20:09:52.111540105Z" level=error msg="encountered an error cleaning up failed sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.114018 containerd[1517]: time="2026-04-13T20:09:52.111584986Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ljtsp,Uid:5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.114193 kubelet[2594]: E0413 20:09:52.111778 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.114193 kubelet[2594]: E0413 20:09:52.111825 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76fdf6b58f-sfc6f" Apr 13 20:09:52.114193 kubelet[2594]: E0413 20:09:52.111839 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76fdf6b58f-sfc6f" Apr 13 20:09:52.110797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e-shm.mount: Deactivated successfully. 
Apr 13 20:09:52.115432 kubelet[2594]: E0413 20:09:52.111883 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76fdf6b58f-sfc6f_calico-system(d28dedcd-57de-44c6-aaf6-ca79c2dd6518)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76fdf6b58f-sfc6f_calico-system(d28dedcd-57de-44c6-aaf6-ca79c2dd6518)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76fdf6b58f-sfc6f" podUID="d28dedcd-57de-44c6-aaf6-ca79c2dd6518" Apr 13 20:09:52.115432 kubelet[2594]: E0413 20:09:52.112134 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.115432 kubelet[2594]: E0413 20:09:52.112153 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ljtsp" Apr 13 20:09:52.110883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db-shm.mount: Deactivated successfully. 
Apr 13 20:09:52.115553 kubelet[2594]: E0413 20:09:52.112164 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-ljtsp" Apr 13 20:09:52.115553 kubelet[2594]: E0413 20:09:52.112189 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-ljtsp_kube-system(5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-ljtsp_kube-system(5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-ljtsp" podUID="5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3" Apr 13 20:09:52.125632 containerd[1517]: time="2026-04-13T20:09:52.125596940Z" level=error msg="Failed to destroy network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.127222 containerd[1517]: time="2026-04-13T20:09:52.127080503Z" level=error msg="encountered an error cleaning up failed sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.127502 containerd[1517]: time="2026-04-13T20:09:52.127376847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799d497f46-qlmsr,Uid:c2f871d5-f5b7-4d50-817f-a1527db6a36c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.128309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5-shm.mount: Deactivated successfully. Apr 13 20:09:52.129626 kubelet[2594]: E0413 20:09:52.128611 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.129800 kubelet[2594]: E0413 20:09:52.129782 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-799d497f46-qlmsr" Apr 13 20:09:52.130066 kubelet[2594]: E0413 20:09:52.129872 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-799d497f46-qlmsr" Apr 13 20:09:52.130066 kubelet[2594]: E0413 20:09:52.130019 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799d497f46-qlmsr_calico-system(c2f871d5-f5b7-4d50-817f-a1527db6a36c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-799d497f46-qlmsr_calico-system(c2f871d5-f5b7-4d50-817f-a1527db6a36c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-799d497f46-qlmsr" podUID="c2f871d5-f5b7-4d50-817f-a1527db6a36c" Apr 13 20:09:52.141011 containerd[1517]: time="2026-04-13T20:09:52.140958635Z" level=error msg="Failed to destroy network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.142661 containerd[1517]: time="2026-04-13T20:09:52.141426052Z" level=error msg="encountered an error cleaning up failed sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 
20:09:52.142661 containerd[1517]: time="2026-04-13T20:09:52.141466703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t47rf,Uid:ac22c161-aa7d-4a94-a2de-6ea0122df0fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.142784 kubelet[2594]: E0413 20:09:52.141631 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.142784 kubelet[2594]: E0413 20:09:52.141668 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-t47rf" Apr 13 20:09:52.142784 kubelet[2594]: E0413 20:09:52.141685 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-t47rf" Apr 13 20:09:52.142854 
kubelet[2594]: E0413 20:09:52.141720 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-t47rf_calico-system(ac22c161-aa7d-4a94-a2de-6ea0122df0fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-t47rf_calico-system(ac22c161-aa7d-4a94-a2de-6ea0122df0fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-t47rf" podUID="ac22c161-aa7d-4a94-a2de-6ea0122df0fc" Apr 13 20:09:52.144571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e-shm.mount: Deactivated successfully. Apr 13 20:09:52.146547 containerd[1517]: time="2026-04-13T20:09:52.146493930Z" level=error msg="Failed to destroy network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.147047 containerd[1517]: time="2026-04-13T20:09:52.147028708Z" level=error msg="encountered an error cleaning up failed sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.147147 containerd[1517]: time="2026-04-13T20:09:52.147133590Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-rk9zl,Uid:7f7c1fad-6f07-437d-826f-867809010e65,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.148872 kubelet[2594]: E0413 20:09:52.147557 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.148872 kubelet[2594]: E0413 20:09:52.147644 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rk9zl" Apr 13 20:09:52.148872 kubelet[2594]: E0413 20:09:52.147664 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rk9zl" Apr 13 20:09:52.148981 kubelet[2594]: E0413 20:09:52.147704 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7d764666f9-rk9zl_kube-system(7f7c1fad-6f07-437d-826f-867809010e65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-rk9zl_kube-system(7f7c1fad-6f07-437d-826f-867809010e65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-rk9zl" podUID="7f7c1fad-6f07-437d-826f-867809010e65" Apr 13 20:09:52.150532 containerd[1517]: time="2026-04-13T20:09:52.150513081Z" level=error msg="Failed to destroy network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.151360 containerd[1517]: time="2026-04-13T20:09:52.151184222Z" level=error msg="encountered an error cleaning up failed sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.151360 containerd[1517]: time="2026-04-13T20:09:52.151224752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68455c66c8-cn7ch,Uid:06d5d9e8-07e3-4ab0-a898-9bb864910c9c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.151671 containerd[1517]: time="2026-04-13T20:09:52.151645349Z" level=error msg="Failed to destroy network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.151910 containerd[1517]: time="2026-04-13T20:09:52.151888812Z" level=error msg="encountered an error cleaning up failed sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.151934 containerd[1517]: time="2026-04-13T20:09:52.151919343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799d497f46-xzwkq,Uid:f935bdd7-a5d9-4d40-8315-4896038786b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.152106 kubelet[2594]: E0413 20:09:52.152080 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.152106 kubelet[2594]: E0413 20:09:52.152110 2594 kuberuntime_sandbox.go:71] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-799d497f46-xzwkq" Apr 13 20:09:52.152106 kubelet[2594]: E0413 20:09:52.152122 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-799d497f46-xzwkq" Apr 13 20:09:52.152216 kubelet[2594]: E0413 20:09:52.152152 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799d497f46-xzwkq_calico-system(f935bdd7-a5d9-4d40-8315-4896038786b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-799d497f46-xzwkq_calico-system(f935bdd7-a5d9-4d40-8315-4896038786b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-799d497f46-xzwkq" podUID="f935bdd7-a5d9-4d40-8315-4896038786b4" Apr 13 20:09:52.152295 kubelet[2594]: E0413 20:09:52.152275 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.152345 kubelet[2594]: E0413 20:09:52.152293 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68455c66c8-cn7ch" Apr 13 20:09:52.152345 kubelet[2594]: E0413 20:09:52.152302 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68455c66c8-cn7ch" Apr 13 20:09:52.152859 kubelet[2594]: E0413 20:09:52.152840 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68455c66c8-cn7ch_calico-system(06d5d9e8-07e3-4ab0-a898-9bb864910c9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68455c66c8-cn7ch_calico-system(06d5d9e8-07e3-4ab0-a898-9bb864910c9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68455c66c8-cn7ch" podUID="06d5d9e8-07e3-4ab0-a898-9bb864910c9c" Apr 13 20:09:52.327164 systemd[1]: 
Created slice kubepods-besteffort-podc44a049a_7045_454f_8fa8_94f080c00249.slice - libcontainer container kubepods-besteffort-podc44a049a_7045_454f_8fa8_94f080c00249.slice. Apr 13 20:09:52.332954 containerd[1517]: time="2026-04-13T20:09:52.332808212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lb95t,Uid:c44a049a-7045-454f-8fa8-94f080c00249,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:52.427491 containerd[1517]: time="2026-04-13T20:09:52.427242987Z" level=error msg="Failed to destroy network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.427947 containerd[1517]: time="2026-04-13T20:09:52.427876017Z" level=error msg="encountered an error cleaning up failed sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.427999 containerd[1517]: time="2026-04-13T20:09:52.427952828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lb95t,Uid:c44a049a-7045-454f-8fa8-94f080c00249,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.429671 kubelet[2594]: E0413 20:09:52.428953 2594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.429671 kubelet[2594]: E0413 20:09:52.429021 2594 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lb95t" Apr 13 20:09:52.429671 kubelet[2594]: E0413 20:09:52.429054 2594 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lb95t" Apr 13 20:09:52.429833 kubelet[2594]: E0413 20:09:52.429156 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lb95t_calico-system(c44a049a-7045-454f-8fa8-94f080c00249)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lb95t_calico-system(c44a049a-7045-454f-8fa8-94f080c00249)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lb95t" 
podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:52.438660 kubelet[2594]: I0413 20:09:52.438127 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:09:52.440778 containerd[1517]: time="2026-04-13T20:09:52.439694188Z" level=info msg="StopPodSandbox for \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\"" Apr 13 20:09:52.440778 containerd[1517]: time="2026-04-13T20:09:52.439908451Z" level=info msg="Ensure that sandbox 032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193 in task-service has been cleanup successfully" Apr 13 20:09:52.448382 kubelet[2594]: I0413 20:09:52.446392 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:09:52.448572 containerd[1517]: time="2026-04-13T20:09:52.448552233Z" level=info msg="StopPodSandbox for \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\"" Apr 13 20:09:52.449956 containerd[1517]: time="2026-04-13T20:09:52.449931974Z" level=info msg="Ensure that sandbox dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e in task-service has been cleanup successfully" Apr 13 20:09:52.450225 kubelet[2594]: I0413 20:09:52.450213 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:09:52.450890 containerd[1517]: time="2026-04-13T20:09:52.450839228Z" level=info msg="StopPodSandbox for \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\"" Apr 13 20:09:52.451091 containerd[1517]: time="2026-04-13T20:09:52.450969990Z" level=info msg="Ensure that sandbox 64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c in task-service has been cleanup successfully" Apr 13 20:09:52.454934 kubelet[2594]: I0413 20:09:52.454912 2594 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:09:52.456185 containerd[1517]: time="2026-04-13T20:09:52.455338257Z" level=info msg="StopPodSandbox for \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\"" Apr 13 20:09:52.456536 containerd[1517]: time="2026-04-13T20:09:52.455684092Z" level=info msg="Ensure that sandbox 91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc in task-service has been cleanup successfully" Apr 13 20:09:52.460528 containerd[1517]: time="2026-04-13T20:09:52.460398214Z" level=info msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:09:52.461008 kubelet[2594]: I0413 20:09:52.460994 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:09:52.463577 containerd[1517]: time="2026-04-13T20:09:52.461845737Z" level=info msg="StopPodSandbox for \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\"" Apr 13 20:09:52.463577 containerd[1517]: time="2026-04-13T20:09:52.463397100Z" level=info msg="Ensure that sandbox 23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e in task-service has been cleanup successfully" Apr 13 20:09:52.474513 kubelet[2594]: I0413 20:09:52.474484 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:09:52.476737 containerd[1517]: time="2026-04-13T20:09:52.476693184Z" level=info msg="StopPodSandbox for \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\"" Apr 13 20:09:52.476876 containerd[1517]: time="2026-04-13T20:09:52.476858956Z" level=info msg="Ensure that sandbox 
9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8 in task-service has been cleanup successfully" Apr 13 20:09:52.480085 kubelet[2594]: I0413 20:09:52.480066 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:09:52.481714 containerd[1517]: time="2026-04-13T20:09:52.481627179Z" level=info msg="StopPodSandbox for \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\"" Apr 13 20:09:52.482286 containerd[1517]: time="2026-04-13T20:09:52.482271469Z" level=info msg="Ensure that sandbox cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db in task-service has been cleanup successfully" Apr 13 20:09:52.482390 kubelet[2594]: I0413 20:09:52.482309 2594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:09:52.483038 containerd[1517]: time="2026-04-13T20:09:52.482766207Z" level=info msg="StopPodSandbox for \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\"" Apr 13 20:09:52.483038 containerd[1517]: time="2026-04-13T20:09:52.482871648Z" level=info msg="Ensure that sandbox 92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5 in task-service has been cleanup successfully" Apr 13 20:09:52.520335 containerd[1517]: time="2026-04-13T20:09:52.518256950Z" level=info msg="CreateContainer within sandbox \"cd166e7fc6e368f1ed0086743913ea325a940d71ddf8546c2dab8ce748d21976\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5\"" Apr 13 20:09:52.521982 containerd[1517]: time="2026-04-13T20:09:52.521964607Z" level=info msg="StartContainer for \"6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5\"" Apr 13 20:09:52.529215 containerd[1517]: time="2026-04-13T20:09:52.529181517Z" level=error msg="StopPodSandbox 
for \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\" failed" error="failed to destroy network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.529497 kubelet[2594]: E0413 20:09:52.529466 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:09:52.529565 kubelet[2594]: E0413 20:09:52.529510 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193"} Apr 13 20:09:52.529597 kubelet[2594]: E0413 20:09:52.529575 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c44a049a-7045-454f-8fa8-94f080c00249\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.529657 kubelet[2594]: E0413 20:09:52.529641 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c44a049a-7045-454f-8fa8-94f080c00249\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lb95t" podUID="c44a049a-7045-454f-8fa8-94f080c00249" Apr 13 20:09:52.532469 containerd[1517]: time="2026-04-13T20:09:52.532435767Z" level=error msg="StopPodSandbox for \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\" failed" error="failed to destroy network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.532565 kubelet[2594]: E0413 20:09:52.532544 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:09:52.532606 kubelet[2594]: E0413 20:09:52.532569 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e"} Apr 13 20:09:52.532606 kubelet[2594]: E0413 20:09:52.532587 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d28dedcd-57de-44c6-aaf6-ca79c2dd6518\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.532662 kubelet[2594]: E0413 20:09:52.532603 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d28dedcd-57de-44c6-aaf6-ca79c2dd6518\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76fdf6b58f-sfc6f" podUID="d28dedcd-57de-44c6-aaf6-ca79c2dd6518" Apr 13 20:09:52.545051 containerd[1517]: time="2026-04-13T20:09:52.544995229Z" level=error msg="StopPodSandbox for \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\" failed" error="failed to destroy network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.545298 kubelet[2594]: E0413 20:09:52.545250 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:09:52.545352 kubelet[2594]: E0413 20:09:52.545311 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5"} Apr 13 20:09:52.545373 kubelet[2594]: E0413 20:09:52.545350 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c2f871d5-f5b7-4d50-817f-a1527db6a36c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.545435 kubelet[2594]: E0413 20:09:52.545373 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c2f871d5-f5b7-4d50-817f-a1527db6a36c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-799d497f46-qlmsr" podUID="c2f871d5-f5b7-4d50-817f-a1527db6a36c" Apr 13 20:09:52.549437 containerd[1517]: time="2026-04-13T20:09:52.549409467Z" level=error msg="StopPodSandbox for \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\" failed" error="failed to destroy network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.549577 kubelet[2594]: E0413 20:09:52.549548 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:09:52.549612 kubelet[2594]: E0413 20:09:52.549587 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc"} Apr 13 20:09:52.549612 kubelet[2594]: E0413 20:09:52.549605 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.549672 kubelet[2594]: E0413 20:09:52.549621 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68455c66c8-cn7ch" podUID="06d5d9e8-07e3-4ab0-a898-9bb864910c9c" Apr 13 20:09:52.553011 containerd[1517]: time="2026-04-13T20:09:52.552942791Z" level=error msg="StopPodSandbox for \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\" failed" error="failed to destroy network 
for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.553280 kubelet[2594]: E0413 20:09:52.553237 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:09:52.553557 kubelet[2594]: E0413 20:09:52.553284 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c"} Apr 13 20:09:52.553557 kubelet[2594]: E0413 20:09:52.553305 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f7c1fad-6f07-437d-826f-867809010e65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.553557 kubelet[2594]: E0413 20:09:52.553361 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f7c1fad-6f07-437d-826f-867809010e65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-rk9zl" podUID="7f7c1fad-6f07-437d-826f-867809010e65" Apr 13 20:09:52.557012 containerd[1517]: time="2026-04-13T20:09:52.556802450Z" level=error msg="StopPodSandbox for \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\" failed" error="failed to destroy network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.557057 kubelet[2594]: E0413 20:09:52.556906 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:09:52.557057 kubelet[2594]: E0413 20:09:52.556925 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e"} Apr 13 20:09:52.557057 kubelet[2594]: E0413 20:09:52.556941 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac22c161-aa7d-4a94-a2de-6ea0122df0fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Apr 13 20:09:52.557057 kubelet[2594]: E0413 20:09:52.556956 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac22c161-aa7d-4a94-a2de-6ea0122df0fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-t47rf" podUID="ac22c161-aa7d-4a94-a2de-6ea0122df0fc" Apr 13 20:09:52.573368 containerd[1517]: time="2026-04-13T20:09:52.572648943Z" level=error msg="StopPodSandbox for \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\" failed" error="failed to destroy network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.572901 systemd[1]: Started cri-containerd-6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5.scope - libcontainer container 6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5. 
Apr 13 20:09:52.573520 kubelet[2594]: E0413 20:09:52.572860 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:09:52.573520 kubelet[2594]: E0413 20:09:52.572901 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db"} Apr 13 20:09:52.573520 kubelet[2594]: E0413 20:09:52.572921 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.573520 kubelet[2594]: E0413 20:09:52.572942 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-ljtsp" podUID="5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3" Apr 13 20:09:52.576940 containerd[1517]: 
time="2026-04-13T20:09:52.576857047Z" level=error msg="StopPodSandbox for \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\" failed" error="failed to destroy network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:09:52.577248 kubelet[2594]: E0413 20:09:52.577230 2594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:09:52.577359 kubelet[2594]: E0413 20:09:52.577346 2594 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8"} Apr 13 20:09:52.577464 kubelet[2594]: E0413 20:09:52.577420 2594 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f935bdd7-a5d9-4d40-8315-4896038786b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:09:52.577464 kubelet[2594]: E0413 20:09:52.577440 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f935bdd7-a5d9-4d40-8315-4896038786b4\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-799d497f46-xzwkq" podUID="f935bdd7-a5d9-4d40-8315-4896038786b4" Apr 13 20:09:52.600590 containerd[1517]: time="2026-04-13T20:09:52.600537239Z" level=info msg="StartContainer for \"6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5\" returns successfully" Apr 13 20:09:53.039795 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c-shm.mount: Deactivated successfully. Apr 13 20:09:53.040222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8-shm.mount: Deactivated successfully. Apr 13 20:09:53.040504 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc-shm.mount: Deactivated successfully. Apr 13 20:09:53.491052 containerd[1517]: time="2026-04-13T20:09:53.489448747Z" level=info msg="StopPodSandbox for \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\"" Apr 13 20:09:53.534845 systemd[1]: run-containerd-runc-k8s.io-6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5-runc.0ve7Kw.mount: Deactivated successfully. 
Apr 13 20:09:53.576226 kubelet[2594]: I0413 20:09:53.575540 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-xtx8w" podStartSLOduration=1.9741681899999999 podStartE2EDuration="19.575521192s" podCreationTimestamp="2026-04-13 20:09:34 +0000 UTC" firstStartedPulling="2026-04-13 20:09:34.834823232 +0000 UTC m=+17.599413490" lastFinishedPulling="2026-04-13 20:09:52.436176204 +0000 UTC m=+35.200766492" observedRunningTime="2026-04-13 20:09:53.541371512 +0000 UTC m=+36.305961780" watchObservedRunningTime="2026-04-13 20:09:53.575521192 +0000 UTC m=+36.340111450" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.574 [INFO][3883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.574 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" iface="eth0" netns="/var/run/netns/cni-ab030753-786c-9fa9-98b4-c7f4ea23f97d" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.575 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" iface="eth0" netns="/var/run/netns/cni-ab030753-786c-9fa9-98b4-c7f4ea23f97d" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.575 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" iface="eth0" netns="/var/run/netns/cni-ab030753-786c-9fa9-98b4-c7f4ea23f97d" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.575 [INFO][3883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.575 [INFO][3883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.595 [INFO][3912] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.596 [INFO][3912] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.596 [INFO][3912] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.600 [WARNING][3912] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.600 [INFO][3912] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.602 [INFO][3912] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:09:53.607441 containerd[1517]: 2026-04-13 20:09:53.605 [INFO][3883] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:09:53.609954 containerd[1517]: time="2026-04-13T20:09:53.607548132Z" level=info msg="TearDown network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\" successfully" Apr 13 20:09:53.609954 containerd[1517]: time="2026-04-13T20:09:53.607571572Z" level=info msg="StopPodSandbox for \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\" returns successfully" Apr 13 20:09:53.610448 systemd[1]: run-netns-cni\x2dab030753\x2d786c\x2d9fa9\x2d98b4\x2dc7f4ea23f97d.mount: Deactivated successfully. 
Apr 13 20:09:53.679452 kubelet[2594]: I0413 20:09:53.679395 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-kube-api-access-mzr95\" (UniqueName: \"kubernetes.io/projected/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-kube-api-access-mzr95\") pod \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " Apr 13 20:09:53.679452 kubelet[2594]: I0413 20:09:53.679470 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-nginx-config\" (UniqueName: \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-nginx-config\") pod \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " Apr 13 20:09:53.679452 kubelet[2594]: I0413 20:09:53.679500 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-ca-bundle\") pod \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " Apr 13 20:09:53.679452 kubelet[2594]: I0413 20:09:53.679535 2594 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-backend-key-pair\") pod \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\" (UID: \"06d5d9e8-07e3-4ab0-a898-9bb864910c9c\") " Apr 13 20:09:53.683711 kubelet[2594]: I0413 20:09:53.683675 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-nginx-config" pod "06d5d9e8-07e3-4ab0-a898-9bb864910c9c" (UID: "06d5d9e8-07e3-4ab0-a898-9bb864910c9c"). 
InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:09:53.684482 kubelet[2594]: I0413 20:09:53.684443 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-ca-bundle" pod "06d5d9e8-07e3-4ab0-a898-9bb864910c9c" (UID: "06d5d9e8-07e3-4ab0-a898-9bb864910c9c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:09:53.687548 kubelet[2594]: I0413 20:09:53.687519 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-backend-key-pair" pod "06d5d9e8-07e3-4ab0-a898-9bb864910c9c" (UID: "06d5d9e8-07e3-4ab0-a898-9bb864910c9c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:09:53.689546 kubelet[2594]: I0413 20:09:53.689493 2594 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-kube-api-access-mzr95" pod "06d5d9e8-07e3-4ab0-a898-9bb864910c9c" (UID: "06d5d9e8-07e3-4ab0-a898-9bb864910c9c"). InnerVolumeSpecName "kube-api-access-mzr95". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:09:53.690673 systemd[1]: var-lib-kubelet-pods-06d5d9e8\x2d07e3\x2d4ab0\x2da898\x2d9bb864910c9c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzr95.mount: Deactivated successfully. Apr 13 20:09:53.690963 systemd[1]: var-lib-kubelet-pods-06d5d9e8\x2d07e3\x2d4ab0\x2da898\x2d9bb864910c9c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 13 20:09:53.780723 kubelet[2594]: I0413 20:09:53.780530 2594 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-nginx-config\") on node \"ci-4081-3-7-1-0f1354cb62\" DevicePath \"\"" Apr 13 20:09:53.780723 kubelet[2594]: I0413 20:09:53.780575 2594 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-ca-bundle\") on node \"ci-4081-3-7-1-0f1354cb62\" DevicePath \"\"" Apr 13 20:09:53.780723 kubelet[2594]: I0413 20:09:53.780595 2594 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-whisker-backend-key-pair\") on node \"ci-4081-3-7-1-0f1354cb62\" DevicePath \"\"" Apr 13 20:09:53.780723 kubelet[2594]: I0413 20:09:53.780611 2594 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mzr95\" (UniqueName: \"kubernetes.io/projected/06d5d9e8-07e3-4ab0-a898-9bb864910c9c-kube-api-access-mzr95\") on node \"ci-4081-3-7-1-0f1354cb62\" DevicePath \"\"" Apr 13 20:09:54.110347 kernel: calico-node[4008]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:09:54.501223 systemd-networkd[1398]: vxlan.calico: Link UP Apr 13 20:09:54.501230 systemd-networkd[1398]: vxlan.calico: Gained carrier Apr 13 20:09:54.506670 systemd[1]: Removed slice kubepods-besteffort-pod06d5d9e8_07e3_4ab0_a898_9bb864910c9c.slice - libcontainer container kubepods-besteffort-pod06d5d9e8_07e3_4ab0_a898_9bb864910c9c.slice. Apr 13 20:09:54.596105 systemd[1]: Created slice kubepods-besteffort-pod0f074ad9_6f88_4bf8_812d_2b3604535dc6.slice - libcontainer container kubepods-besteffort-pod0f074ad9_6f88_4bf8_812d_2b3604535dc6.slice. 
Apr 13 20:09:54.686191 kubelet[2594]: I0413 20:09:54.686033 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g465p\" (UniqueName: \"kubernetes.io/projected/0f074ad9-6f88-4bf8-812d-2b3604535dc6-kube-api-access-g465p\") pod \"whisker-8b9566d67-2fmk7\" (UID: \"0f074ad9-6f88-4bf8-812d-2b3604535dc6\") " pod="calico-system/whisker-8b9566d67-2fmk7" Apr 13 20:09:54.686191 kubelet[2594]: I0413 20:09:54.686106 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f074ad9-6f88-4bf8-812d-2b3604535dc6-whisker-backend-key-pair\") pod \"whisker-8b9566d67-2fmk7\" (UID: \"0f074ad9-6f88-4bf8-812d-2b3604535dc6\") " pod="calico-system/whisker-8b9566d67-2fmk7" Apr 13 20:09:54.686191 kubelet[2594]: I0413 20:09:54.686130 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0f074ad9-6f88-4bf8-812d-2b3604535dc6-nginx-config\") pod \"whisker-8b9566d67-2fmk7\" (UID: \"0f074ad9-6f88-4bf8-812d-2b3604535dc6\") " pod="calico-system/whisker-8b9566d67-2fmk7" Apr 13 20:09:54.686191 kubelet[2594]: I0413 20:09:54.686142 2594 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f074ad9-6f88-4bf8-812d-2b3604535dc6-whisker-ca-bundle\") pod \"whisker-8b9566d67-2fmk7\" (UID: \"0f074ad9-6f88-4bf8-812d-2b3604535dc6\") " pod="calico-system/whisker-8b9566d67-2fmk7" Apr 13 20:09:54.902850 containerd[1517]: time="2026-04-13T20:09:54.902756757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8b9566d67-2fmk7,Uid:0f074ad9-6f88-4bf8-812d-2b3604535dc6,Namespace:calico-system,Attempt:0,}" Apr 13 20:09:55.002303 systemd-networkd[1398]: calieb949dfc0a5: Link UP Apr 13 20:09:55.003401 systemd-networkd[1398]: 
calieb949dfc0a5: Gained carrier Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.943 [INFO][4155] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0 whisker-8b9566d67- calico-system 0f074ad9-6f88-4bf8-812d-2b3604535dc6 897 0 2026-04-13 20:09:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8b9566d67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 whisker-8b9566d67-2fmk7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieb949dfc0a5 [] [] }} ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.943 [INFO][4155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.965 [INFO][4167] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" HandleID="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.970 [INFO][4167] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" HandleID="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" 
Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277200), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"whisker-8b9566d67-2fmk7", "timestamp":"2026-04-13 20:09:54.965178827 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e6f20)} Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.970 [INFO][4167] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.970 [INFO][4167] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.970 [INFO][4167] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.972 [INFO][4167] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.976 [INFO][4167] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.980 [INFO][4167] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.982 [INFO][4167] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.983 [INFO][4167] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 
host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.983 [INFO][4167] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.986 [INFO][4167] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23 Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.989 [INFO][4167] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.993 [INFO][4167] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.72.129/26] block=192.168.72.128/26 handle="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.993 [INFO][4167] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.129/26] handle="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.993 [INFO][4167] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:09:55.016073 containerd[1517]: 2026-04-13 20:09:54.993 [INFO][4167] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.129/26] IPv6=[] ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" HandleID="k8s-pod-network.5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" Apr 13 20:09:55.016781 containerd[1517]: 2026-04-13 20:09:54.998 [INFO][4155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0", GenerateName:"whisker-8b9566d67-", Namespace:"calico-system", SelfLink:"", UID:"0f074ad9-6f88-4bf8-812d-2b3604535dc6", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8b9566d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"whisker-8b9566d67-2fmk7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calieb949dfc0a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:55.016781 containerd[1517]: 2026-04-13 20:09:54.998 [INFO][4155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.129/32] ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" Apr 13 20:09:55.016781 containerd[1517]: 2026-04-13 20:09:54.998 [INFO][4155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb949dfc0a5 ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" Apr 13 20:09:55.016781 containerd[1517]: 2026-04-13 20:09:55.004 [INFO][4155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" Apr 13 20:09:55.016781 containerd[1517]: 2026-04-13 20:09:55.004 [INFO][4155] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0", GenerateName:"whisker-8b9566d67-", Namespace:"calico-system", SelfLink:"", 
UID:"0f074ad9-6f88-4bf8-812d-2b3604535dc6", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8b9566d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23", Pod:"whisker-8b9566d67-2fmk7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.72.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb949dfc0a5", MAC:"86:ca:e5:f6:4d:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:09:55.016781 containerd[1517]: 2026-04-13 20:09:55.011 [INFO][4155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23" Namespace="calico-system" Pod="whisker-8b9566d67-2fmk7" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--8b9566d67--2fmk7-eth0" Apr 13 20:09:55.035695 containerd[1517]: time="2026-04-13T20:09:55.034544221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:09:55.035695 containerd[1517]: time="2026-04-13T20:09:55.034581031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:09:55.035695 containerd[1517]: time="2026-04-13T20:09:55.034590762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:55.035695 containerd[1517]: time="2026-04-13T20:09:55.034648612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:09:55.062762 systemd[1]: Started cri-containerd-5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23.scope - libcontainer container 5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23. Apr 13 20:09:55.099996 containerd[1517]: time="2026-04-13T20:09:55.099903985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8b9566d67-2fmk7,Uid:0f074ad9-6f88-4bf8-812d-2b3604535dc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23\"" Apr 13 20:09:55.104412 containerd[1517]: time="2026-04-13T20:09:55.104388302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:09:55.321900 kubelet[2594]: I0413 20:09:55.321841 2594 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="06d5d9e8-07e3-4ab0-a898-9bb864910c9c" path="/var/lib/kubelet/pods/06d5d9e8-07e3-4ab0-a898-9bb864910c9c/volumes" Apr 13 20:09:56.134152 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Apr 13 20:09:56.773667 systemd-networkd[1398]: calieb949dfc0a5: Gained IPv6LL Apr 13 20:09:57.442402 containerd[1517]: time="2026-04-13T20:09:57.442343234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:57.443553 containerd[1517]: time="2026-04-13T20:09:57.443353795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 
20:09:57.445387 containerd[1517]: time="2026-04-13T20:09:57.444498438Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:57.446386 containerd[1517]: time="2026-04-13T20:09:57.446355388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:09:57.447138 containerd[1517]: time="2026-04-13T20:09:57.446831914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.342419422s" Apr 13 20:09:57.447138 containerd[1517]: time="2026-04-13T20:09:57.446867154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:09:57.450701 containerd[1517]: time="2026-04-13T20:09:57.450670606Z" level=info msg="CreateContainer within sandbox \"5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:09:57.464384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1844194388.mount: Deactivated successfully. 
Apr 13 20:09:57.471253 containerd[1517]: time="2026-04-13T20:09:57.471133713Z" level=info msg="CreateContainer within sandbox \"5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4350ea72f88e1d5d7b4f72f098d3f6b3f5c9927bad491046b182e52a6fad2308\"" Apr 13 20:09:57.474555 containerd[1517]: time="2026-04-13T20:09:57.474508831Z" level=info msg="StartContainer for \"4350ea72f88e1d5d7b4f72f098d3f6b3f5c9927bad491046b182e52a6fad2308\"" Apr 13 20:09:57.511440 systemd[1]: Started cri-containerd-4350ea72f88e1d5d7b4f72f098d3f6b3f5c9927bad491046b182e52a6fad2308.scope - libcontainer container 4350ea72f88e1d5d7b4f72f098d3f6b3f5c9927bad491046b182e52a6fad2308. Apr 13 20:09:57.546487 containerd[1517]: time="2026-04-13T20:09:57.546455137Z" level=info msg="StartContainer for \"4350ea72f88e1d5d7b4f72f098d3f6b3f5c9927bad491046b182e52a6fad2308\" returns successfully" Apr 13 20:09:57.549015 containerd[1517]: time="2026-04-13T20:09:57.548755502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:10:00.031922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1440068926.mount: Deactivated successfully. 
Apr 13 20:10:00.050892 containerd[1517]: time="2026-04-13T20:10:00.050841545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:00.052159 containerd[1517]: time="2026-04-13T20:10:00.052026365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:10:00.053362 containerd[1517]: time="2026-04-13T20:10:00.053242307Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:00.055861 containerd[1517]: time="2026-04-13T20:10:00.055825500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:00.056704 containerd[1517]: time="2026-04-13T20:10:00.056267084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.507487101s" Apr 13 20:10:00.056704 containerd[1517]: time="2026-04-13T20:10:00.056304295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:10:00.060517 containerd[1517]: time="2026-04-13T20:10:00.060398622Z" level=info msg="CreateContainer within sandbox \"5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:10:00.077639 
containerd[1517]: time="2026-04-13T20:10:00.077588279Z" level=info msg="CreateContainer within sandbox \"5b3eb77ae60f5751ce404094243f22dbdd26052dd2b35aeba50f376b99376d23\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4ad2d388e154a83deb5407b4f02675cb1fa07fca4959cfe10cef7f320d00591d\"" Apr 13 20:10:00.078136 containerd[1517]: time="2026-04-13T20:10:00.078099374Z" level=info msg="StartContainer for \"4ad2d388e154a83deb5407b4f02675cb1fa07fca4959cfe10cef7f320d00591d\"" Apr 13 20:10:00.104437 systemd[1]: Started cri-containerd-4ad2d388e154a83deb5407b4f02675cb1fa07fca4959cfe10cef7f320d00591d.scope - libcontainer container 4ad2d388e154a83deb5407b4f02675cb1fa07fca4959cfe10cef7f320d00591d. Apr 13 20:10:00.145425 containerd[1517]: time="2026-04-13T20:10:00.145255607Z" level=info msg="StartContainer for \"4ad2d388e154a83deb5407b4f02675cb1fa07fca4959cfe10cef7f320d00591d\" returns successfully" Apr 13 20:10:00.527605 kubelet[2594]: I0413 20:10:00.527520 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-8b9566d67-2fmk7" podStartSLOduration=1.573328238 podStartE2EDuration="6.527502168s" podCreationTimestamp="2026-04-13 20:09:54 +0000 UTC" firstStartedPulling="2026-04-13 20:09:55.10269317 +0000 UTC m=+37.867283438" lastFinishedPulling="2026-04-13 20:10:00.0568671 +0000 UTC m=+42.821457368" observedRunningTime="2026-04-13 20:10:00.525984024 +0000 UTC m=+43.290574322" watchObservedRunningTime="2026-04-13 20:10:00.527502168 +0000 UTC m=+43.292092466" Apr 13 20:10:03.319354 containerd[1517]: time="2026-04-13T20:10:03.318995804Z" level=info msg="StopPodSandbox for \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\"" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.380 [INFO][4363] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.381 [INFO][4363] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" iface="eth0" netns="/var/run/netns/cni-63516745-0c65-60a0-408a-785a7fde5e5b" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.381 [INFO][4363] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" iface="eth0" netns="/var/run/netns/cni-63516745-0c65-60a0-408a-785a7fde5e5b" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.381 [INFO][4363] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" iface="eth0" netns="/var/run/netns/cni-63516745-0c65-60a0-408a-785a7fde5e5b" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.381 [INFO][4363] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.381 [INFO][4363] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.406 [INFO][4371] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.406 [INFO][4371] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.406 [INFO][4371] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.411 [WARNING][4371] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.411 [INFO][4371] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.413 [INFO][4371] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:03.418720 containerd[1517]: 2026-04-13 20:10:03.416 [INFO][4363] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:03.420682 containerd[1517]: time="2026-04-13T20:10:03.420414967Z" level=info msg="TearDown network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\" successfully" Apr 13 20:10:03.420682 containerd[1517]: time="2026-04-13T20:10:03.420465588Z" level=info msg="StopPodSandbox for \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\" returns successfully" Apr 13 20:10:03.422057 systemd[1]: run-netns-cni\x2d63516745\x2d0c65\x2d60a0\x2d408a\x2d785a7fde5e5b.mount: Deactivated successfully. 
Apr 13 20:10:03.424560 containerd[1517]: time="2026-04-13T20:10:03.424408347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799d497f46-qlmsr,Uid:c2f871d5-f5b7-4d50-817f-a1527db6a36c,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:03.541958 systemd-networkd[1398]: cali6c587cb79c7: Link UP Apr 13 20:10:03.545891 systemd-networkd[1398]: cali6c587cb79c7: Gained carrier Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.473 [INFO][4377] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0 calico-apiserver-799d497f46- calico-system c2f871d5-f5b7-4d50-817f-a1527db6a36c 940 0 2026-04-13 20:09:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799d497f46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 calico-apiserver-799d497f46-qlmsr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6c587cb79c7 [] [] }} ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.474 [INFO][4377] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.501 [INFO][4390] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" HandleID="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.507 [INFO][4390] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" HandleID="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036f520), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"calico-apiserver-799d497f46-qlmsr", "timestamp":"2026-04-13 20:10:03.501154235 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00027f080)} Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.507 [INFO][4390] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.507 [INFO][4390] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.507 [INFO][4390] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.509 [INFO][4390] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.513 [INFO][4390] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.519 [INFO][4390] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.521 [INFO][4390] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.523 [INFO][4390] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.523 [INFO][4390] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.525 [INFO][4390] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3 Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.529 [INFO][4390] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.536 [INFO][4390] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.72.130/26] block=192.168.72.128/26 handle="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.536 [INFO][4390] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.130/26] handle="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.536 [INFO][4390] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:03.562151 containerd[1517]: 2026-04-13 20:10:03.536 [INFO][4390] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.130/26] IPv6=[] ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" HandleID="k8s-pod-network.ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.562857 containerd[1517]: 2026-04-13 20:10:03.539 [INFO][4377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"c2f871d5-f5b7-4d50-817f-a1527db6a36c", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"calico-apiserver-799d497f46-qlmsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6c587cb79c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:03.562857 containerd[1517]: 2026-04-13 20:10:03.539 [INFO][4377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.130/32] ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.562857 containerd[1517]: 2026-04-13 20:10:03.539 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c587cb79c7 ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.562857 containerd[1517]: 2026-04-13 20:10:03.544 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" 
WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.562857 containerd[1517]: 2026-04-13 20:10:03.547 [INFO][4377] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"c2f871d5-f5b7-4d50-817f-a1527db6a36c", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3", Pod:"calico-apiserver-799d497f46-qlmsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6c587cb79c7", MAC:"ca:3d:dd:a5:ac:b3", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:03.562857 containerd[1517]: 2026-04-13 20:10:03.557 [INFO][4377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3" Namespace="calico-system" Pod="calico-apiserver-799d497f46-qlmsr" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:03.583491 containerd[1517]: time="2026-04-13T20:10:03.583229683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:03.583598 containerd[1517]: time="2026-04-13T20:10:03.583366944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:03.583598 containerd[1517]: time="2026-04-13T20:10:03.583376824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:03.583598 containerd[1517]: time="2026-04-13T20:10:03.583464364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:03.610688 systemd[1]: Started cri-containerd-ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3.scope - libcontainer container ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3. 
Apr 13 20:10:03.646153 containerd[1517]: time="2026-04-13T20:10:03.646041925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799d497f46-qlmsr,Uid:c2f871d5-f5b7-4d50-817f-a1527db6a36c,Namespace:calico-system,Attempt:1,} returns sandbox id \"ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3\"" Apr 13 20:10:03.648169 containerd[1517]: time="2026-04-13T20:10:03.648029950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:10:04.317874 containerd[1517]: time="2026-04-13T20:10:04.317801161Z" level=info msg="StopPodSandbox for \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\"" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.382 [INFO][4467] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.383 [INFO][4467] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" iface="eth0" netns="/var/run/netns/cni-07823ab7-1612-2dc6-a10f-60ba410bb075" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.384 [INFO][4467] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" iface="eth0" netns="/var/run/netns/cni-07823ab7-1612-2dc6-a10f-60ba410bb075" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.384 [INFO][4467] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" iface="eth0" netns="/var/run/netns/cni-07823ab7-1612-2dc6-a10f-60ba410bb075" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.384 [INFO][4467] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.384 [INFO][4467] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.405 [INFO][4475] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.406 [INFO][4475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.406 [INFO][4475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.410 [WARNING][4475] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.410 [INFO][4475] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.411 [INFO][4475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:04.415738 containerd[1517]: 2026-04-13 20:10:04.413 [INFO][4467] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:04.416794 containerd[1517]: time="2026-04-13T20:10:04.416733569Z" level=info msg="TearDown network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\" successfully" Apr 13 20:10:04.416883 containerd[1517]: time="2026-04-13T20:10:04.416765149Z" level=info msg="StopPodSandbox for \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\" returns successfully" Apr 13 20:10:04.420623 containerd[1517]: time="2026-04-13T20:10:04.420543226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t47rf,Uid:ac22c161-aa7d-4a94-a2de-6ea0122df0fc,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:04.421565 systemd[1]: run-netns-cni\x2d07823ab7\x2d1612\x2d2dc6\x2da10f\x2d60ba410bb075.mount: Deactivated successfully. 
Apr 13 20:10:04.522245 systemd-networkd[1398]: cali4aed23e021b: Link UP Apr 13 20:10:04.525349 systemd-networkd[1398]: cali4aed23e021b: Gained carrier Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.465 [INFO][4482] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0 goldmane-9f7667bb8- calico-system ac22c161-aa7d-4a94-a2de-6ea0122df0fc 947 0 2026-04-13 20:09:34 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 goldmane-9f7667bb8-t47rf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4aed23e021b [] [] }} ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.465 [INFO][4482] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.484 [INFO][4493] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" HandleID="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.489 [INFO][4493] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" HandleID="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277330), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"goldmane-9f7667bb8-t47rf", "timestamp":"2026-04-13 20:10:04.48493088 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e0f20)} Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.490 [INFO][4493] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.490 [INFO][4493] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.490 [INFO][4493] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.492 [INFO][4493] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.496 [INFO][4493] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.502 [INFO][4493] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.503 [INFO][4493] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.505 [INFO][4493] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.505 [INFO][4493] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.507 [INFO][4493] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.510 [INFO][4493] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.515 [INFO][4493] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.72.131/26] block=192.168.72.128/26 handle="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.515 [INFO][4493] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.131/26] handle="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.515 [INFO][4493] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:04.542617 containerd[1517]: 2026-04-13 20:10:04.515 [INFO][4493] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.131/26] IPv6=[] ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" HandleID="k8s-pod-network.90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.543060 containerd[1517]: 2026-04-13 20:10:04.518 [INFO][4482] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ac22c161-aa7d-4a94-a2de-6ea0122df0fc", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"goldmane-9f7667bb8-t47rf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4aed23e021b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:04.543060 containerd[1517]: 2026-04-13 20:10:04.518 [INFO][4482] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.131/32] ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.543060 containerd[1517]: 2026-04-13 20:10:04.519 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4aed23e021b ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.543060 containerd[1517]: 2026-04-13 20:10:04.520 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.543060 containerd[1517]: 2026-04-13 20:10:04.526 [INFO][4482] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ac22c161-aa7d-4a94-a2de-6ea0122df0fc", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a", Pod:"goldmane-9f7667bb8-t47rf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4aed23e021b", MAC:"a6:0f:ad:9d:2c:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:04.543060 containerd[1517]: 2026-04-13 20:10:04.538 [INFO][4482] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a" Namespace="calico-system" Pod="goldmane-9f7667bb8-t47rf" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:04.561135 containerd[1517]: time="2026-04-13T20:10:04.560847025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:04.561135 containerd[1517]: time="2026-04-13T20:10:04.560892656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:04.561135 containerd[1517]: time="2026-04-13T20:10:04.560900486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:04.561135 containerd[1517]: time="2026-04-13T20:10:04.561024997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:04.589191 systemd[1]: Started cri-containerd-90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a.scope - libcontainer container 90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a. 
Apr 13 20:10:04.627586 containerd[1517]: time="2026-04-13T20:10:04.627554776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-t47rf,Uid:ac22c161-aa7d-4a94-a2de-6ea0122df0fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a\"" Apr 13 20:10:05.029693 systemd-networkd[1398]: cali6c587cb79c7: Gained IPv6LL Apr 13 20:10:05.331484 containerd[1517]: time="2026-04-13T20:10:05.330222858Z" level=info msg="StopPodSandbox for \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\"" Apr 13 20:10:05.344925 containerd[1517]: time="2026-04-13T20:10:05.344859365Z" level=info msg="StopPodSandbox for \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\"" Apr 13 20:10:05.346261 containerd[1517]: time="2026-04-13T20:10:05.346177973Z" level=info msg="StopPodSandbox for \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\"" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.448 [INFO][4602] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.448 [INFO][4602] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" iface="eth0" netns="/var/run/netns/cni-14dc4ac3-6a72-441b-90e8-372bf960b459" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.449 [INFO][4602] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" iface="eth0" netns="/var/run/netns/cni-14dc4ac3-6a72-441b-90e8-372bf960b459" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.449 [INFO][4602] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" iface="eth0" netns="/var/run/netns/cni-14dc4ac3-6a72-441b-90e8-372bf960b459" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.449 [INFO][4602] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.449 [INFO][4602] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.482 [INFO][4621] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.484 [INFO][4621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.484 [INFO][4621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.490 [WARNING][4621] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.491 [INFO][4621] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.492 [INFO][4621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:05.500542 containerd[1517]: 2026-04-13 20:10:05.496 [INFO][4602] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:05.504526 containerd[1517]: time="2026-04-13T20:10:05.503770816Z" level=info msg="TearDown network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\" successfully" Apr 13 20:10:05.504526 containerd[1517]: time="2026-04-13T20:10:05.503796286Z" level=info msg="StopPodSandbox for \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\" returns successfully" Apr 13 20:10:05.504271 systemd[1]: run-netns-cni\x2d14dc4ac3\x2d6a72\x2d441b\x2d90e8\x2d372bf960b459.mount: Deactivated successfully. 
Apr 13 20:10:05.509479 containerd[1517]: time="2026-04-13T20:10:05.508720598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799d497f46-xzwkq,Uid:f935bdd7-a5d9-4d40-8315-4896038786b4,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.454 [INFO][4597] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.456 [INFO][4597] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" iface="eth0" netns="/var/run/netns/cni-5589b5a1-e74e-0081-a71e-736ebaddd47a" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.456 [INFO][4597] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" iface="eth0" netns="/var/run/netns/cni-5589b5a1-e74e-0081-a71e-736ebaddd47a" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.456 [INFO][4597] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" iface="eth0" netns="/var/run/netns/cni-5589b5a1-e74e-0081-a71e-736ebaddd47a" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.456 [INFO][4597] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.456 [INFO][4597] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.497 [INFO][4632] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.497 [INFO][4632] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.497 [INFO][4632] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.506 [WARNING][4632] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.507 [INFO][4632] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.509 [INFO][4632] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:05.519647 containerd[1517]: 2026-04-13 20:10:05.511 [INFO][4597] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:05.521375 containerd[1517]: time="2026-04-13T20:10:05.521337472Z" level=info msg="TearDown network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\" successfully" Apr 13 20:10:05.521468 containerd[1517]: time="2026-04-13T20:10:05.521457083Z" level=info msg="StopPodSandbox for \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\" returns successfully" Apr 13 20:10:05.528200 systemd[1]: run-netns-cni\x2d5589b5a1\x2de74e\x2d0081\x2da71e\x2d736ebaddd47a.mount: Deactivated successfully. 
Apr 13 20:10:05.529360 containerd[1517]: time="2026-04-13T20:10:05.528541389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fdf6b58f-sfc6f,Uid:d28dedcd-57de-44c6-aaf6-ca79c2dd6518,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:05.541659 systemd-networkd[1398]: cali4aed23e021b: Gained IPv6LL Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.445 [INFO][4603] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.445 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" iface="eth0" netns="/var/run/netns/cni-d30c18e3-e75d-1f3f-e1a9-6104e91c3a85" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.446 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" iface="eth0" netns="/var/run/netns/cni-d30c18e3-e75d-1f3f-e1a9-6104e91c3a85" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.455 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" iface="eth0" netns="/var/run/netns/cni-d30c18e3-e75d-1f3f-e1a9-6104e91c3a85" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.455 [INFO][4603] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.455 [INFO][4603] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.516 [INFO][4626] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.517 [INFO][4626] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.517 [INFO][4626] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.525 [WARNING][4626] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.525 [INFO][4626] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.527 [INFO][4626] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:05.546886 containerd[1517]: 2026-04-13 20:10:05.539 [INFO][4603] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:05.550893 systemd[1]: run-netns-cni\x2dd30c18e3\x2de75d\x2d1f3f\x2de1a9\x2d6104e91c3a85.mount: Deactivated successfully. 
Apr 13 20:10:05.551359 containerd[1517]: time="2026-04-13T20:10:05.551217609Z" level=info msg="TearDown network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\" successfully" Apr 13 20:10:05.551359 containerd[1517]: time="2026-04-13T20:10:05.551266530Z" level=info msg="StopPodSandbox for \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\" returns successfully" Apr 13 20:10:05.557986 containerd[1517]: time="2026-04-13T20:10:05.556735076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rk9zl,Uid:7f7c1fad-6f07-437d-826f-867809010e65,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:05.673067 systemd-networkd[1398]: calid3510870b70: Link UP Apr 13 20:10:05.674469 systemd-networkd[1398]: calid3510870b70: Gained carrier Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.569 [INFO][4641] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0 calico-apiserver-799d497f46- calico-system f935bdd7-a5d9-4d40-8315-4896038786b4 958 0 2026-04-13 20:09:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799d497f46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 calico-apiserver-799d497f46-xzwkq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid3510870b70 [] [] }} ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.569 [INFO][4641] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.604 [INFO][4659] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" HandleID="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.612 [INFO][4659] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" HandleID="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000405710), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"calico-apiserver-799d497f46-xzwkq", "timestamp":"2026-04-13 20:10:05.604586302 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000410dc0)} Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.612 [INFO][4659] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.612 [INFO][4659] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.613 [INFO][4659] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.615 [INFO][4659] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.621 [INFO][4659] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.630 [INFO][4659] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.636 [INFO][4659] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.639 [INFO][4659] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.639 [INFO][4659] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.641 [INFO][4659] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.649 [INFO][4659] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.656 [INFO][4659] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.72.132/26] block=192.168.72.128/26 handle="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.656 [INFO][4659] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.132/26] handle="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.656 [INFO][4659] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:05.694798 containerd[1517]: 2026-04-13 20:10:05.656 [INFO][4659] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.132/26] IPv6=[] ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" HandleID="k8s-pod-network.8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.695229 containerd[1517]: 2026-04-13 20:10:05.661 [INFO][4641] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"f935bdd7-a5d9-4d40-8315-4896038786b4", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"calico-apiserver-799d497f46-xzwkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid3510870b70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.695229 containerd[1517]: 2026-04-13 20:10:05.661 [INFO][4641] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.132/32] ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.695229 containerd[1517]: 2026-04-13 20:10:05.661 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid3510870b70 ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.695229 containerd[1517]: 2026-04-13 20:10:05.675 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" 
WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.695229 containerd[1517]: 2026-04-13 20:10:05.676 [INFO][4641] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"f935bdd7-a5d9-4d40-8315-4896038786b4", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd", Pod:"calico-apiserver-799d497f46-xzwkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid3510870b70", MAC:"c6:4e:a5:c7:3a:ba", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.695229 containerd[1517]: 2026-04-13 20:10:05.687 [INFO][4641] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd" Namespace="calico-system" Pod="calico-apiserver-799d497f46-xzwkq" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:05.756923 containerd[1517]: time="2026-04-13T20:10:05.756868130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:05.757104 containerd[1517]: time="2026-04-13T20:10:05.757079661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:05.757196 containerd[1517]: time="2026-04-13T20:10:05.757180472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.757764 containerd[1517]: time="2026-04-13T20:10:05.757721425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.778605 systemd-networkd[1398]: caliaad9e7ecb18: Link UP Apr 13 20:10:05.780776 systemd-networkd[1398]: caliaad9e7ecb18: Gained carrier Apr 13 20:10:05.796430 systemd[1]: Started cri-containerd-8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd.scope - libcontainer container 8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd. 
Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.639 [INFO][4664] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0 coredns-7d764666f9- kube-system 7f7c1fad-6f07-437d-826f-867809010e65 957 0 2026-04-13 20:09:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 coredns-7d764666f9-rk9zl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaad9e7ecb18 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.640 [INFO][4664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.707 [INFO][4694] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" HandleID="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.724 [INFO][4694] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" 
HandleID="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"coredns-7d764666f9-rk9zl", "timestamp":"2026-04-13 20:10:05.707480003 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003654a0)} Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.724 [INFO][4694] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.724 [INFO][4694] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.724 [INFO][4694] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.728 [INFO][4694] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.735 [INFO][4694] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.745 [INFO][4694] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.747 [INFO][4694] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.749 [INFO][4694] 
ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.749 [INFO][4694] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.751 [INFO][4694] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312 Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.756 [INFO][4694] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.763 [INFO][4694] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.72.133/26] block=192.168.72.128/26 handle="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.763 [INFO][4694] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.133/26] handle="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.763 [INFO][4694] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:10:05.810386 containerd[1517]: 2026-04-13 20:10:05.763 [INFO][4694] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.133/26] IPv6=[] ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" HandleID="k8s-pod-network.c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.810980 containerd[1517]: 2026-04-13 20:10:05.768 [INFO][4664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f7c1fad-6f07-437d-826f-867809010e65", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"coredns-7d764666f9-rk9zl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"caliaad9e7ecb18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.810980 containerd[1517]: 2026-04-13 20:10:05.768 [INFO][4664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.133/32] ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.810980 containerd[1517]: 2026-04-13 20:10:05.768 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaad9e7ecb18 ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.810980 containerd[1517]: 2026-04-13 20:10:05.782 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 
20:10:05.810980 containerd[1517]: 2026-04-13 20:10:05.789 [INFO][4664] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f7c1fad-6f07-437d-826f-867809010e65", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312", Pod:"coredns-7d764666f9-rk9zl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaad9e7ecb18", MAC:"66:89:f4:0d:a8:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.811354 containerd[1517]: 2026-04-13 20:10:05.801 [INFO][4664] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312" Namespace="kube-system" Pod="coredns-7d764666f9-rk9zl" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:05.865016 containerd[1517]: time="2026-04-13T20:10:05.863600356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:05.865016 containerd[1517]: time="2026-04-13T20:10:05.863652586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:05.865016 containerd[1517]: time="2026-04-13T20:10:05.863663546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.865016 containerd[1517]: time="2026-04-13T20:10:05.863733867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.886935 systemd[1]: Started cri-containerd-c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312.scope - libcontainer container c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312. 
Apr 13 20:10:05.889959 systemd-networkd[1398]: califc32aa2bc96: Link UP Apr 13 20:10:05.890119 systemd-networkd[1398]: califc32aa2bc96: Gained carrier Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.657 [INFO][4657] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0 calico-kube-controllers-76fdf6b58f- calico-system d28dedcd-57de-44c6-aaf6-ca79c2dd6518 959 0 2026-04-13 20:09:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76fdf6b58f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 calico-kube-controllers-76fdf6b58f-sfc6f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califc32aa2bc96 [] [] }} ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.658 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.736 [INFO][4700] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" HandleID="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" 
Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.744 [INFO][4700] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" HandleID="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000410080), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"calico-kube-controllers-76fdf6b58f-sfc6f", "timestamp":"2026-04-13 20:10:05.736083912 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000d4580)} Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.744 [INFO][4700] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.763 [INFO][4700] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.763 [INFO][4700] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.829 [INFO][4700] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.837 [INFO][4700] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.843 [INFO][4700] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.845 [INFO][4700] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.848 [INFO][4700] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.848 [INFO][4700] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.849 [INFO][4700] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.859 [INFO][4700] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.872 [INFO][4700] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.72.134/26] block=192.168.72.128/26 handle="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.872 [INFO][4700] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.134/26] handle="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.872 [INFO][4700] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:05.910388 containerd[1517]: 2026-04-13 20:10:05.872 [INFO][4700] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.134/26] IPv6=[] ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" HandleID="k8s-pod-network.5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.910793 containerd[1517]: 2026-04-13 20:10:05.884 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0", GenerateName:"calico-kube-controllers-76fdf6b58f-", Namespace:"calico-system", SelfLink:"", UID:"d28dedcd-57de-44c6-aaf6-ca79c2dd6518", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fdf6b58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"calico-kube-controllers-76fdf6b58f-sfc6f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc32aa2bc96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.910793 containerd[1517]: 2026-04-13 20:10:05.884 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.134/32] ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.910793 containerd[1517]: 2026-04-13 20:10:05.885 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc32aa2bc96 ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.910793 containerd[1517]: 2026-04-13 20:10:05.888 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.910793 containerd[1517]: 2026-04-13 20:10:05.889 [INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0", GenerateName:"calico-kube-controllers-76fdf6b58f-", Namespace:"calico-system", SelfLink:"", UID:"d28dedcd-57de-44c6-aaf6-ca79c2dd6518", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fdf6b58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d", Pod:"calico-kube-controllers-76fdf6b58f-sfc6f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.134/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc32aa2bc96", MAC:"22:24:48:0a:73:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:05.910793 containerd[1517]: 2026-04-13 20:10:05.903 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d" Namespace="calico-system" Pod="calico-kube-controllers-76fdf6b58f-sfc6f" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:05.916374 containerd[1517]: time="2026-04-13T20:10:05.916206834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799d497f46-xzwkq,Uid:f935bdd7-a5d9-4d40-8315-4896038786b4,Namespace:calico-system,Attempt:1,} returns sandbox id \"8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd\"" Apr 13 20:10:05.951012 containerd[1517]: time="2026-04-13T20:10:05.950749112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:05.955130 containerd[1517]: time="2026-04-13T20:10:05.954385906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:05.955130 containerd[1517]: time="2026-04-13T20:10:05.954401996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.955130 containerd[1517]: time="2026-04-13T20:10:05.954474257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:05.973254 containerd[1517]: time="2026-04-13T20:10:05.973171600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rk9zl,Uid:7f7c1fad-6f07-437d-826f-867809010e65,Namespace:kube-system,Attempt:1,} returns sandbox id \"c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312\"" Apr 13 20:10:05.980828 containerd[1517]: time="2026-04-13T20:10:05.980794111Z" level=info msg="CreateContainer within sandbox \"c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:10:05.994787 systemd[1]: Started cri-containerd-5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d.scope - libcontainer container 5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d. Apr 13 20:10:06.002697 containerd[1517]: time="2026-04-13T20:10:06.002587564Z" level=info msg="CreateContainer within sandbox \"c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c8069b1f681741f192c5d12509cf5f93f317df98bfab8ec6d48b37c4b0b5a7bc\"" Apr 13 20:10:06.004083 containerd[1517]: time="2026-04-13T20:10:06.003895792Z" level=info msg="StartContainer for \"c8069b1f681741f192c5d12509cf5f93f317df98bfab8ec6d48b37c4b0b5a7bc\"" Apr 13 20:10:06.060494 systemd[1]: Started cri-containerd-c8069b1f681741f192c5d12509cf5f93f317df98bfab8ec6d48b37c4b0b5a7bc.scope - libcontainer container c8069b1f681741f192c5d12509cf5f93f317df98bfab8ec6d48b37c4b0b5a7bc. 
Apr 13 20:10:06.065004 containerd[1517]: time="2026-04-13T20:10:06.064442338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76fdf6b58f-sfc6f,Uid:d28dedcd-57de-44c6-aaf6-ca79c2dd6518,Namespace:calico-system,Attempt:1,} returns sandbox id \"5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d\"" Apr 13 20:10:06.101132 containerd[1517]: time="2026-04-13T20:10:06.100908054Z" level=info msg="StartContainer for \"c8069b1f681741f192c5d12509cf5f93f317df98bfab8ec6d48b37c4b0b5a7bc\" returns successfully" Apr 13 20:10:06.319261 containerd[1517]: time="2026-04-13T20:10:06.319213107Z" level=info msg="StopPodSandbox for \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\"" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.389 [INFO][4929] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.389 [INFO][4929] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" iface="eth0" netns="/var/run/netns/cni-b775e7ae-8ee8-6b72-072b-123cba57f819" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.390 [INFO][4929] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" iface="eth0" netns="/var/run/netns/cni-b775e7ae-8ee8-6b72-072b-123cba57f819" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.390 [INFO][4929] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" iface="eth0" netns="/var/run/netns/cni-b775e7ae-8ee8-6b72-072b-123cba57f819" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.390 [INFO][4929] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.390 [INFO][4929] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.421 [INFO][4938] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.421 [INFO][4938] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.421 [INFO][4938] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.427 [WARNING][4938] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.427 [INFO][4938] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.430 [INFO][4938] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:06.436890 containerd[1517]: 2026-04-13 20:10:06.434 [INFO][4929] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:06.438083 containerd[1517]: time="2026-04-13T20:10:06.437801963Z" level=info msg="TearDown network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\" successfully" Apr 13 20:10:06.438083 containerd[1517]: time="2026-04-13T20:10:06.437898533Z" level=info msg="StopPodSandbox for \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\" returns successfully" Apr 13 20:10:06.440916 systemd[1]: run-netns-cni\x2db775e7ae\x2d8ee8\x2d6b72\x2d072b\x2d123cba57f819.mount: Deactivated successfully. 
Apr 13 20:10:06.442605 containerd[1517]: time="2026-04-13T20:10:06.442157180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ljtsp,Uid:5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3,Namespace:kube-system,Attempt:1,}" Apr 13 20:10:06.557903 kubelet[2594]: I0413 20:10:06.557850 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-rk9zl" podStartSLOduration=42.557838897 podStartE2EDuration="42.557838897s" podCreationTimestamp="2026-04-13 20:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:06.557579456 +0000 UTC m=+49.322169734" watchObservedRunningTime="2026-04-13 20:10:06.557838897 +0000 UTC m=+49.322429165" Apr 13 20:10:06.586472 systemd-networkd[1398]: calif3d3dc21c1b: Link UP Apr 13 20:10:06.589667 systemd-networkd[1398]: calif3d3dc21c1b: Gained carrier Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.496 [INFO][4945] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0 coredns-7d764666f9- kube-system 5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3 979 0 2026-04-13 20:09:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 coredns-7d764666f9-ljtsp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif3d3dc21c1b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-" Apr 13 20:10:06.611699 
containerd[1517]: 2026-04-13 20:10:06.496 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.520 [INFO][4957] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" HandleID="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.525 [INFO][4957] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" HandleID="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"coredns-7d764666f9-ljtsp", "timestamp":"2026-04-13 20:10:06.520269984 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)} Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.525 [INFO][4957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.525 [INFO][4957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.526 [INFO][4957] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.528 [INFO][4957] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.535 [INFO][4957] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.541 [INFO][4957] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.545 [INFO][4957] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.551 [INFO][4957] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.551 [INFO][4957] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.552 [INFO][4957] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1 Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.559 [INFO][4957] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.570 [INFO][4957] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.72.135/26] block=192.168.72.128/26 handle="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.571 [INFO][4957] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.135/26] handle="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.571 [INFO][4957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:06.611699 containerd[1517]: 2026-04-13 20:10:06.571 [INFO][4957] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.135/26] IPv6=[] ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" HandleID="k8s-pod-network.4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.613574 containerd[1517]: 2026-04-13 20:10:06.579 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"coredns-7d764666f9-ljtsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d3dc21c1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:06.613574 containerd[1517]: 2026-04-13 20:10:06.579 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.135/32] ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.613574 containerd[1517]: 2026-04-13 20:10:06.580 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3d3dc21c1b 
ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.613574 containerd[1517]: 2026-04-13 20:10:06.590 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.613574 containerd[1517]: 2026-04-13 20:10:06.591 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", 
ContainerID:"4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1", Pod:"coredns-7d764666f9-ljtsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d3dc21c1b", MAC:"7e:c7:d5:ba:c9:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:06.613715 containerd[1517]: 2026-04-13 20:10:06.604 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1" Namespace="kube-system" Pod="coredns-7d764666f9-ljtsp" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:06.669160 containerd[1517]: time="2026-04-13T20:10:06.668944586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:06.669160 containerd[1517]: time="2026-04-13T20:10:06.668996686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:06.669160 containerd[1517]: time="2026-04-13T20:10:06.669006796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:06.669875 containerd[1517]: time="2026-04-13T20:10:06.669711751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:06.706427 systemd[1]: Started cri-containerd-4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1.scope - libcontainer container 4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1. Apr 13 20:10:06.754058 containerd[1517]: time="2026-04-13T20:10:06.754028344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ljtsp,Uid:5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3,Namespace:kube-system,Attempt:1,} returns sandbox id \"4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1\"" Apr 13 20:10:06.761217 containerd[1517]: time="2026-04-13T20:10:06.761068897Z" level=info msg="CreateContainer within sandbox \"4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:10:06.775366 containerd[1517]: time="2026-04-13T20:10:06.775288625Z" level=info msg="CreateContainer within sandbox \"4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c691a20fecf7e1a9f559f4c2e2770e6d03b432b3319f19a307f86720c4f03c0\"" Apr 13 20:10:06.776277 containerd[1517]: time="2026-04-13T20:10:06.776259161Z" level=info msg="StartContainer for \"8c691a20fecf7e1a9f559f4c2e2770e6d03b432b3319f19a307f86720c4f03c0\"" Apr 13 20:10:06.806433 systemd[1]: Started cri-containerd-8c691a20fecf7e1a9f559f4c2e2770e6d03b432b3319f19a307f86720c4f03c0.scope - libcontainer container 
8c691a20fecf7e1a9f559f4c2e2770e6d03b432b3319f19a307f86720c4f03c0. Apr 13 20:10:06.835435 containerd[1517]: time="2026-04-13T20:10:06.835264137Z" level=info msg="StartContainer for \"8c691a20fecf7e1a9f559f4c2e2770e6d03b432b3319f19a307f86720c4f03c0\" returns successfully" Apr 13 20:10:06.850760 containerd[1517]: time="2026-04-13T20:10:06.849878738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:06.851783 containerd[1517]: time="2026-04-13T20:10:06.851760150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:10:06.853292 containerd[1517]: time="2026-04-13T20:10:06.853267489Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:06.855656 containerd[1517]: time="2026-04-13T20:10:06.855639784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:06.857010 containerd[1517]: time="2026-04-13T20:10:06.856991912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.208929281s" Apr 13 20:10:06.857072 containerd[1517]: time="2026-04-13T20:10:06.857061943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:10:06.859130 containerd[1517]: 
time="2026-04-13T20:10:06.859113365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:10:06.861327 containerd[1517]: time="2026-04-13T20:10:06.861288929Z" level=info msg="CreateContainer within sandbox \"ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:10:06.885393 containerd[1517]: time="2026-04-13T20:10:06.885364448Z" level=info msg="CreateContainer within sandbox \"ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e8911b5a43a0799acd9846669690881b99b566bdb062cb51e197dd46d151e08\"" Apr 13 20:10:06.886235 containerd[1517]: time="2026-04-13T20:10:06.886142583Z" level=info msg="StartContainer for \"1e8911b5a43a0799acd9846669690881b99b566bdb062cb51e197dd46d151e08\"" Apr 13 20:10:06.910442 systemd[1]: Started cri-containerd-1e8911b5a43a0799acd9846669690881b99b566bdb062cb51e197dd46d151e08.scope - libcontainer container 1e8911b5a43a0799acd9846669690881b99b566bdb062cb51e197dd46d151e08. 
Apr 13 20:10:06.948855 containerd[1517]: time="2026-04-13T20:10:06.948450499Z" level=info msg="StartContainer for \"1e8911b5a43a0799acd9846669690881b99b566bdb062cb51e197dd46d151e08\" returns successfully" Apr 13 20:10:07.319729 containerd[1517]: time="2026-04-13T20:10:07.319026083Z" level=info msg="StopPodSandbox for \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\"" Apr 13 20:10:07.334730 systemd-networkd[1398]: calid3510870b70: Gained IPv6LL Apr 13 20:10:07.397840 systemd-networkd[1398]: califc32aa2bc96: Gained IPv6LL Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.379 [INFO][5118] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.380 [INFO][5118] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" iface="eth0" netns="/var/run/netns/cni-e97001d5-7e63-72c0-0d35-1e69941ae65e" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.380 [INFO][5118] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" iface="eth0" netns="/var/run/netns/cni-e97001d5-7e63-72c0-0d35-1e69941ae65e" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.380 [INFO][5118] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" iface="eth0" netns="/var/run/netns/cni-e97001d5-7e63-72c0-0d35-1e69941ae65e" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.380 [INFO][5118] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.380 [INFO][5118] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.398 [INFO][5130] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.398 [INFO][5130] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.399 [INFO][5130] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.404 [WARNING][5130] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.404 [INFO][5130] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.405 [INFO][5130] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:07.410669 containerd[1517]: 2026-04-13 20:10:07.407 [INFO][5118] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:07.411402 containerd[1517]: time="2026-04-13T20:10:07.410915067Z" level=info msg="TearDown network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\" successfully" Apr 13 20:10:07.411402 containerd[1517]: time="2026-04-13T20:10:07.410935568Z" level=info msg="StopPodSandbox for \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\" returns successfully" Apr 13 20:10:07.414086 containerd[1517]: time="2026-04-13T20:10:07.413690754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lb95t,Uid:c44a049a-7045-454f-8fa8-94f080c00249,Namespace:calico-system,Attempt:1,}" Apr 13 20:10:07.424694 systemd[1]: run-netns-cni\x2de97001d5\x2d7e63\x2d72c0\x2d0d35\x2d1e69941ae65e.mount: Deactivated successfully. 
Apr 13 20:10:07.461482 systemd-networkd[1398]: caliaad9e7ecb18: Gained IPv6LL Apr 13 20:10:07.540951 systemd-networkd[1398]: cali88e2f06a350: Link UP Apr 13 20:10:07.542579 systemd-networkd[1398]: cali88e2f06a350: Gained carrier Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.458 [INFO][5139] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0 csi-node-driver- calico-system c44a049a-7045-454f-8fa8-94f080c00249 1000 0 2026-04-13 20:09:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-1-0f1354cb62 csi-node-driver-lb95t eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali88e2f06a350 [] [] }} ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.458 [INFO][5139] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.484 [INFO][5153] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" HandleID="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 
20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.489 [INFO][5153] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" HandleID="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-1-0f1354cb62", "pod":"csi-node-driver-lb95t", "timestamp":"2026-04-13 20:10:07.484153353 +0000 UTC"}, Hostname:"ci-4081-3-7-1-0f1354cb62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e8dc0)} Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.489 [INFO][5153] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.489 [INFO][5153] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.490 [INFO][5153] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-1-0f1354cb62' Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.493 [INFO][5153] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.502 [INFO][5153] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.506 [INFO][5153] ipam/ipam.go 526: Trying affinity for 192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.510 [INFO][5153] ipam/ipam.go 160: Attempting to load block cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.513 [INFO][5153] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.72.128/26 host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.513 [INFO][5153] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.72.128/26 handle="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.517 [INFO][5153] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.522 [INFO][5153] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.72.128/26 handle="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.531 [INFO][5153] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.72.136/26] block=192.168.72.128/26 handle="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.531 [INFO][5153] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.72.136/26] handle="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" host="ci-4081-3-7-1-0f1354cb62" Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.531 [INFO][5153] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:07.563447 containerd[1517]: 2026-04-13 20:10:07.531 [INFO][5153] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.72.136/26] IPv6=[] ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" HandleID="k8s-pod-network.81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.564698 containerd[1517]: 2026-04-13 20:10:07.535 [INFO][5139] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c44a049a-7045-454f-8fa8-94f080c00249", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"", Pod:"csi-node-driver-lb95t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88e2f06a350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:07.564698 containerd[1517]: 2026-04-13 20:10:07.535 [INFO][5139] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.136/32] ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.564698 containerd[1517]: 2026-04-13 20:10:07.535 [INFO][5139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88e2f06a350 ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.564698 containerd[1517]: 2026-04-13 20:10:07.541 [INFO][5139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.564698 
containerd[1517]: 2026-04-13 20:10:07.542 [INFO][5139] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c44a049a-7045-454f-8fa8-94f080c00249", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c", Pod:"csi-node-driver-lb95t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88e2f06a350", MAC:"fa:06:ea:88:b5:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:07.564698 containerd[1517]: 
2026-04-13 20:10:07.559 [INFO][5139] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c" Namespace="calico-system" Pod="csi-node-driver-lb95t" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:07.564846 kubelet[2594]: I0413 20:10:07.563746 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ljtsp" podStartSLOduration=43.563735276 podStartE2EDuration="43.563735276s" podCreationTimestamp="2026-04-13 20:09:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:10:07.563069552 +0000 UTC m=+50.327659820" watchObservedRunningTime="2026-04-13 20:10:07.563735276 +0000 UTC m=+50.328325544" Apr 13 20:10:07.610171 containerd[1517]: time="2026-04-13T20:10:07.599622574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:10:07.610171 containerd[1517]: time="2026-04-13T20:10:07.599677485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:10:07.610171 containerd[1517]: time="2026-04-13T20:10:07.599687585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:07.610171 containerd[1517]: time="2026-04-13T20:10:07.599764635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:10:07.634564 systemd[1]: Started cri-containerd-81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c.scope - libcontainer container 81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c. 
Apr 13 20:10:07.676040 containerd[1517]: time="2026-04-13T20:10:07.674937102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lb95t,Uid:c44a049a-7045-454f-8fa8-94f080c00249,Namespace:calico-system,Attempt:1,} returns sandbox id \"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c\"" Apr 13 20:10:07.902253 kubelet[2594]: I0413 20:10:07.901434 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-799d497f46-qlmsr" podStartSLOduration=31.69130529 podStartE2EDuration="34.901419139s" podCreationTimestamp="2026-04-13 20:09:33 +0000 UTC" firstStartedPulling="2026-04-13 20:10:03.647716638 +0000 UTC m=+46.412306896" lastFinishedPulling="2026-04-13 20:10:06.857830477 +0000 UTC m=+49.622420745" observedRunningTime="2026-04-13 20:10:07.594304414 +0000 UTC m=+50.358894682" watchObservedRunningTime="2026-04-13 20:10:07.901419139 +0000 UTC m=+50.666009407" Apr 13 20:10:08.358244 systemd-networkd[1398]: calif3d3dc21c1b: Gained IPv6LL Apr 13 20:10:09.520920 systemd-networkd[1398]: cali88e2f06a350: Gained IPv6LL Apr 13 20:10:09.885828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1135433709.mount: Deactivated successfully. 
Apr 13 20:10:11.053401 containerd[1517]: time="2026-04-13T20:10:11.053358102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:11.054503 containerd[1517]: time="2026-04-13T20:10:11.054469598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:10:11.055381 containerd[1517]: time="2026-04-13T20:10:11.055350792Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:11.057267 containerd[1517]: time="2026-04-13T20:10:11.057154980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:11.057804 containerd[1517]: time="2026-04-13T20:10:11.057779282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.198594026s" Apr 13 20:10:11.057833 containerd[1517]: time="2026-04-13T20:10:11.057803893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:10:11.059812 containerd[1517]: time="2026-04-13T20:10:11.059690621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:10:11.061580 containerd[1517]: time="2026-04-13T20:10:11.061395119Z" level=info msg="CreateContainer within sandbox 
\"90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:10:11.083215 containerd[1517]: time="2026-04-13T20:10:11.083177976Z" level=info msg="CreateContainer within sandbox \"90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"51178f0d7f21bc3d55d12b5c612dbbfe6ecfb29392b9bd07efe27d8af553dda6\"" Apr 13 20:10:11.083713 containerd[1517]: time="2026-04-13T20:10:11.083696339Z" level=info msg="StartContainer for \"51178f0d7f21bc3d55d12b5c612dbbfe6ecfb29392b9bd07efe27d8af553dda6\"" Apr 13 20:10:11.110437 systemd[1]: Started cri-containerd-51178f0d7f21bc3d55d12b5c612dbbfe6ecfb29392b9bd07efe27d8af553dda6.scope - libcontainer container 51178f0d7f21bc3d55d12b5c612dbbfe6ecfb29392b9bd07efe27d8af553dda6. Apr 13 20:10:11.145278 containerd[1517]: time="2026-04-13T20:10:11.145249665Z" level=info msg="StartContainer for \"51178f0d7f21bc3d55d12b5c612dbbfe6ecfb29392b9bd07efe27d8af553dda6\" returns successfully" Apr 13 20:10:11.564955 containerd[1517]: time="2026-04-13T20:10:11.564648919Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:11.567048 containerd[1517]: time="2026-04-13T20:10:11.566133966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 20:10:11.574840 containerd[1517]: time="2026-04-13T20:10:11.574766295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 515.043974ms" Apr 13 20:10:11.574840 containerd[1517]: 
time="2026-04-13T20:10:11.574832125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:10:11.591920 kubelet[2594]: I0413 20:10:11.591817 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-t47rf" podStartSLOduration=31.164410427 podStartE2EDuration="37.591805431s" podCreationTimestamp="2026-04-13 20:09:34 +0000 UTC" firstStartedPulling="2026-04-13 20:10:04.631156712 +0000 UTC m=+47.395746970" lastFinishedPulling="2026-04-13 20:10:11.058551706 +0000 UTC m=+53.823141974" observedRunningTime="2026-04-13 20:10:11.590483095 +0000 UTC m=+54.355073453" watchObservedRunningTime="2026-04-13 20:10:11.591805431 +0000 UTC m=+54.356395699" Apr 13 20:10:11.598781 containerd[1517]: time="2026-04-13T20:10:11.598501281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:10:11.606914 containerd[1517]: time="2026-04-13T20:10:11.606832209Z" level=info msg="CreateContainer within sandbox \"8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:10:11.634875 containerd[1517]: time="2026-04-13T20:10:11.634842384Z" level=info msg="CreateContainer within sandbox \"8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"808880a0f838a93f87d54bf0301fddbcc8b077300d71399feb14ea82e2c231d3\"" Apr 13 20:10:11.635823 containerd[1517]: time="2026-04-13T20:10:11.635757518Z" level=info msg="StartContainer for \"808880a0f838a93f87d54bf0301fddbcc8b077300d71399feb14ea82e2c231d3\"" Apr 13 20:10:11.671442 systemd[1]: Started cri-containerd-808880a0f838a93f87d54bf0301fddbcc8b077300d71399feb14ea82e2c231d3.scope - libcontainer container 808880a0f838a93f87d54bf0301fddbcc8b077300d71399feb14ea82e2c231d3. 
Apr 13 20:10:11.708193 containerd[1517]: time="2026-04-13T20:10:11.708148544Z" level=info msg="StartContainer for \"808880a0f838a93f87d54bf0301fddbcc8b077300d71399feb14ea82e2c231d3\" returns successfully" Apr 13 20:10:12.660044 kubelet[2594]: I0413 20:10:12.659981 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-799d497f46-xzwkq" podStartSLOduration=33.982176521 podStartE2EDuration="39.659970013s" podCreationTimestamp="2026-04-13 20:09:33 +0000 UTC" firstStartedPulling="2026-04-13 20:10:05.918827421 +0000 UTC m=+48.683417689" lastFinishedPulling="2026-04-13 20:10:11.596620923 +0000 UTC m=+54.361211181" observedRunningTime="2026-04-13 20:10:12.603218094 +0000 UTC m=+55.367808382" watchObservedRunningTime="2026-04-13 20:10:12.659970013 +0000 UTC m=+55.424560281" Apr 13 20:10:15.949895 containerd[1517]: time="2026-04-13T20:10:15.949802707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.951014 containerd[1517]: time="2026-04-13T20:10:15.950867051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:10:15.952134 containerd[1517]: time="2026-04-13T20:10:15.951944074Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.954002 containerd[1517]: time="2026-04-13T20:10:15.953974281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:15.954576 containerd[1517]: time="2026-04-13T20:10:15.954549663Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.355997632s" Apr 13 20:10:15.954612 containerd[1517]: time="2026-04-13T20:10:15.954579794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:10:15.955861 containerd[1517]: time="2026-04-13T20:10:15.955554267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:10:15.974542 containerd[1517]: time="2026-04-13T20:10:15.974506353Z" level=info msg="CreateContainer within sandbox \"5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:10:15.988284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419938412.mount: Deactivated successfully. Apr 13 20:10:15.991079 containerd[1517]: time="2026-04-13T20:10:15.991050080Z" level=info msg="CreateContainer within sandbox \"5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ddd5a2826272ce0a152e4bc092831e9bef639ebf02399160a52d39ae9861ff64\"" Apr 13 20:10:15.992609 containerd[1517]: time="2026-04-13T20:10:15.991698402Z" level=info msg="StartContainer for \"ddd5a2826272ce0a152e4bc092831e9bef639ebf02399160a52d39ae9861ff64\"" Apr 13 20:10:16.021544 systemd[1]: Started cri-containerd-ddd5a2826272ce0a152e4bc092831e9bef639ebf02399160a52d39ae9861ff64.scope - libcontainer container ddd5a2826272ce0a152e4bc092831e9bef639ebf02399160a52d39ae9861ff64. 
Apr 13 20:10:16.070402 containerd[1517]: time="2026-04-13T20:10:16.070363570Z" level=info msg="StartContainer for \"ddd5a2826272ce0a152e4bc092831e9bef639ebf02399160a52d39ae9861ff64\" returns successfully" Apr 13 20:10:16.611946 kubelet[2594]: I0413 20:10:16.611596 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76fdf6b58f-sfc6f" podStartSLOduration=32.723089407 podStartE2EDuration="42.61152685s" podCreationTimestamp="2026-04-13 20:09:34 +0000 UTC" firstStartedPulling="2026-04-13 20:10:06.066974453 +0000 UTC m=+48.831564721" lastFinishedPulling="2026-04-13 20:10:15.955411896 +0000 UTC m=+58.720002164" observedRunningTime="2026-04-13 20:10:16.608805681 +0000 UTC m=+59.373395999" watchObservedRunningTime="2026-04-13 20:10:16.61152685 +0000 UTC m=+59.376117188" Apr 13 20:10:17.302444 containerd[1517]: time="2026-04-13T20:10:17.302370605Z" level=info msg="StopPodSandbox for \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\"" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.386 [WARNING][5466] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"c2f871d5-f5b7-4d50-817f-a1527db6a36c", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3", Pod:"calico-apiserver-799d497f46-qlmsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6c587cb79c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.387 [INFO][5466] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.387 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" iface="eth0" netns="" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.387 [INFO][5466] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.387 [INFO][5466] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.404 [INFO][5475] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.404 [INFO][5475] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.404 [INFO][5475] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.409 [WARNING][5475] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.409 [INFO][5475] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.411 [INFO][5475] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.416131 containerd[1517]: 2026-04-13 20:10:17.413 [INFO][5466] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.416568 containerd[1517]: time="2026-04-13T20:10:17.416538893Z" level=info msg="TearDown network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\" successfully" Apr 13 20:10:17.416568 containerd[1517]: time="2026-04-13T20:10:17.416564623Z" level=info msg="StopPodSandbox for \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\" returns successfully" Apr 13 20:10:17.417053 containerd[1517]: time="2026-04-13T20:10:17.417023924Z" level=info msg="RemovePodSandbox for \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\"" Apr 13 20:10:17.417053 containerd[1517]: time="2026-04-13T20:10:17.417044704Z" level=info msg="Forcibly stopping sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\"" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.441 [WARNING][5490] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"c2f871d5-f5b7-4d50-817f-a1527db6a36c", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"ba415b7b3352123e24519610700df9b1692b4031e89db98f329f36887881a9c3", Pod:"calico-apiserver-799d497f46-qlmsr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6c587cb79c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.441 [INFO][5490] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.441 [INFO][5490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" iface="eth0" netns="" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.441 [INFO][5490] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.441 [INFO][5490] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.456 [INFO][5497] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.456 [INFO][5497] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.456 [INFO][5497] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.461 [WARNING][5497] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.462 [INFO][5497] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" HandleID="k8s-pod-network.92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--qlmsr-eth0" Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.463 [INFO][5497] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.467552 containerd[1517]: 2026-04-13 20:10:17.465 [INFO][5490] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5" Apr 13 20:10:17.467949 containerd[1517]: time="2026-04-13T20:10:17.467604538Z" level=info msg="TearDown network for sandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\" successfully" Apr 13 20:10:17.471277 containerd[1517]: time="2026-04-13T20:10:17.471250000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:17.471411 containerd[1517]: time="2026-04-13T20:10:17.471299100Z" level=info msg="RemovePodSandbox \"92777dc88dab0c9fb20a399bb38ee737f845cc9bb5e604b19fae497007448cb5\" returns successfully" Apr 13 20:10:17.471836 containerd[1517]: time="2026-04-13T20:10:17.471811011Z" level=info msg="StopPodSandbox for \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\"" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.499 [WARNING][5511] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ac22c161-aa7d-4a94-a2de-6ea0122df0fc", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a", Pod:"goldmane-9f7667bb8-t47rf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali4aed23e021b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.499 [INFO][5511] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.499 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" iface="eth0" netns="" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.499 [INFO][5511] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.499 [INFO][5511] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.513 [INFO][5518] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.513 [INFO][5518] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.513 [INFO][5518] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.518 [WARNING][5518] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.518 [INFO][5518] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.519 [INFO][5518] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.523148 containerd[1517]: 2026-04-13 20:10:17.521 [INFO][5511] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.523910 containerd[1517]: time="2026-04-13T20:10:17.523186148Z" level=info msg="TearDown network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\" successfully" Apr 13 20:10:17.523910 containerd[1517]: time="2026-04-13T20:10:17.523209398Z" level=info msg="StopPodSandbox for \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\" returns successfully" Apr 13 20:10:17.523910 containerd[1517]: time="2026-04-13T20:10:17.523605139Z" level=info msg="RemovePodSandbox for \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\"" Apr 13 20:10:17.523910 containerd[1517]: time="2026-04-13T20:10:17.523627009Z" level=info msg="Forcibly stopping sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\"" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.549 [WARNING][5532] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"ac22c161-aa7d-4a94-a2de-6ea0122df0fc", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"90362b52b32e6604eabb3e294d3a712d0867e3138fbc9166516e2fcd9d87297a", Pod:"goldmane-9f7667bb8-t47rf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4aed23e021b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.549 [INFO][5532] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.549 [INFO][5532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" iface="eth0" netns="" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.549 [INFO][5532] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.549 [INFO][5532] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.575 [INFO][5539] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.575 [INFO][5539] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.575 [INFO][5539] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.582 [WARNING][5539] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.582 [INFO][5539] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" HandleID="k8s-pod-network.dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-goldmane--9f7667bb8--t47rf-eth0" Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.584 [INFO][5539] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.588762 containerd[1517]: 2026-04-13 20:10:17.586 [INFO][5532] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e" Apr 13 20:10:17.589942 containerd[1517]: time="2026-04-13T20:10:17.589157049Z" level=info msg="TearDown network for sandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\" successfully" Apr 13 20:10:17.595943 containerd[1517]: time="2026-04-13T20:10:17.595917820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:17.596055 containerd[1517]: time="2026-04-13T20:10:17.595968230Z" level=info msg="RemovePodSandbox \"dbb8c898115f34cf25c8a10209740254f6791bd6e48d6b404b7928992557ff0e\" returns successfully" Apr 13 20:10:17.596171 containerd[1517]: time="2026-04-13T20:10:17.596143290Z" level=info msg="StopPodSandbox for \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\"" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.628 [WARNING][5555] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.628 [INFO][5555] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.628 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" iface="eth0" netns="" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.628 [INFO][5555] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.628 [INFO][5555] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.658 [INFO][5564] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.658 [INFO][5564] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.658 [INFO][5564] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.665 [WARNING][5564] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.665 [INFO][5564] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.666 [INFO][5564] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.672717 containerd[1517]: 2026-04-13 20:10:17.670 [INFO][5555] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.673262 containerd[1517]: time="2026-04-13T20:10:17.672735984Z" level=info msg="TearDown network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\" successfully" Apr 13 20:10:17.673262 containerd[1517]: time="2026-04-13T20:10:17.672754754Z" level=info msg="StopPodSandbox for \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\" returns successfully" Apr 13 20:10:17.673652 containerd[1517]: time="2026-04-13T20:10:17.673413426Z" level=info msg="RemovePodSandbox for \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\"" Apr 13 20:10:17.673652 containerd[1517]: time="2026-04-13T20:10:17.673434726Z" level=info msg="Forcibly stopping sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\"" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.711 [WARNING][5581] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" WorkloadEndpoint="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.711 [INFO][5581] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.711 [INFO][5581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" iface="eth0" netns="" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.711 [INFO][5581] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.711 [INFO][5581] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.748 [INFO][5588] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.749 [INFO][5588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.749 [INFO][5588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.755 [WARNING][5588] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.755 [INFO][5588] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" HandleID="k8s-pod-network.91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Workload="ci--4081--3--7--1--0f1354cb62-k8s-whisker--68455c66c8--cn7ch-eth0" Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.757 [INFO][5588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.762510 containerd[1517]: 2026-04-13 20:10:17.760 [INFO][5581] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc" Apr 13 20:10:17.762889 containerd[1517]: time="2026-04-13T20:10:17.762539688Z" level=info msg="TearDown network for sandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\" successfully" Apr 13 20:10:17.770261 containerd[1517]: time="2026-04-13T20:10:17.770114121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:17.770261 containerd[1517]: time="2026-04-13T20:10:17.770204701Z" level=info msg="RemovePodSandbox \"91a72f091071bf65754f98c537b3a3b370b5e8f692f820b612e33e9ac8fd58cc\" returns successfully" Apr 13 20:10:17.770766 containerd[1517]: time="2026-04-13T20:10:17.770747793Z" level=info msg="StopPodSandbox for \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\"" Apr 13 20:10:17.772247 containerd[1517]: time="2026-04-13T20:10:17.772017717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:17.773858 containerd[1517]: time="2026-04-13T20:10:17.773834752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:10:17.774961 containerd[1517]: time="2026-04-13T20:10:17.774945536Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:17.776751 containerd[1517]: time="2026-04-13T20:10:17.776735201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:17.777966 containerd[1517]: time="2026-04-13T20:10:17.777950865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.822374008s" Apr 13 20:10:17.778037 containerd[1517]: time="2026-04-13T20:10:17.778026855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:10:17.783702 containerd[1517]: time="2026-04-13T20:10:17.783672232Z" level=info msg="CreateContainer within sandbox \"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:10:17.805614 containerd[1517]: time="2026-04-13T20:10:17.804360475Z" level=info msg="CreateContainer within sandbox \"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f0e0614bddc00b322dc6d7358c08eb67add7a3f7f8c0c0854d420a956ee9a9da\"" Apr 13 20:10:17.805614 containerd[1517]: time="2026-04-13T20:10:17.804935437Z" level=info msg="StartContainer for \"f0e0614bddc00b322dc6d7358c08eb67add7a3f7f8c0c0854d420a956ee9a9da\"" Apr 13 20:10:17.843441 systemd[1]: Started cri-containerd-f0e0614bddc00b322dc6d7358c08eb67add7a3f7f8c0c0854d420a956ee9a9da.scope - libcontainer container f0e0614bddc00b322dc6d7358c08eb67add7a3f7f8c0c0854d420a956ee9a9da. Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.808 [WARNING][5609] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f7c1fad-6f07-437d-826f-867809010e65", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312", Pod:"coredns-7d764666f9-rk9zl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaad9e7ecb18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.809 [INFO][5609] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.809 [INFO][5609] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" iface="eth0" netns="" Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.809 [INFO][5609] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.809 [INFO][5609] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.835 [INFO][5620] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.835 [INFO][5620] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.835 [INFO][5620] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.840 [WARNING][5620] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.841 [INFO][5620] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.842 [INFO][5620] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.847755 containerd[1517]: 2026-04-13 20:10:17.845 [INFO][5609] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.848086 containerd[1517]: time="2026-04-13T20:10:17.847784788Z" level=info msg="TearDown network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\" successfully" Apr 13 20:10:17.848086 containerd[1517]: time="2026-04-13T20:10:17.847804608Z" level=info msg="StopPodSandbox for \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\" returns successfully" Apr 13 20:10:17.848360 containerd[1517]: time="2026-04-13T20:10:17.848307069Z" level=info msg="RemovePodSandbox for \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\"" Apr 13 20:10:17.848395 containerd[1517]: time="2026-04-13T20:10:17.848368120Z" level=info msg="Forcibly stopping sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\"" Apr 13 20:10:17.877124 containerd[1517]: time="2026-04-13T20:10:17.877084237Z" level=info msg="StartContainer for \"f0e0614bddc00b322dc6d7358c08eb67add7a3f7f8c0c0854d420a956ee9a9da\" returns successfully" Apr 13 
20:10:17.878732 containerd[1517]: time="2026-04-13T20:10:17.878673472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.876 [WARNING][5654] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"7f7c1fad-6f07-437d-826f-867809010e65", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"c89ea94b9a7b9f6c8cf548d245b596f44961dd340a2d5f1b68eb80533c65d312", Pod:"coredns-7d764666f9-rk9zl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaad9e7ecb18", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.876 [INFO][5654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.876 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" iface="eth0" netns="" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.876 [INFO][5654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.876 [INFO][5654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.897 [INFO][5671] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.897 [INFO][5671] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM 
lock. Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.897 [INFO][5671] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.902 [WARNING][5671] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.902 [INFO][5671] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" HandleID="k8s-pod-network.64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--rk9zl-eth0" Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.903 [INFO][5671] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.906849 containerd[1517]: 2026-04-13 20:10:17.905 [INFO][5654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c" Apr 13 20:10:17.907179 containerd[1517]: time="2026-04-13T20:10:17.906905228Z" level=info msg="TearDown network for sandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\" successfully" Apr 13 20:10:17.910608 containerd[1517]: time="2026-04-13T20:10:17.910583199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:17.910662 containerd[1517]: time="2026-04-13T20:10:17.910646219Z" level=info msg="RemovePodSandbox \"64b831e8dc516a9a166fdf05261c8e29316fd61df194e1f8fd5b4a57c956531c\" returns successfully" Apr 13 20:10:17.911192 containerd[1517]: time="2026-04-13T20:10:17.911058151Z" level=info msg="StopPodSandbox for \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\"" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.940 [WARNING][5685] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c44a049a-7045-454f-8fa8-94f080c00249", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c", Pod:"csi-node-driver-lb95t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88e2f06a350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.940 [INFO][5685] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.940 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" iface="eth0" netns="" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.940 [INFO][5685] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.940 [INFO][5685] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.955 [INFO][5692] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.956 [INFO][5692] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.956 [INFO][5692] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.960 [WARNING][5692] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.960 [INFO][5692] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.962 [INFO][5692] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:17.966027 containerd[1517]: 2026-04-13 20:10:17.964 [INFO][5685] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:17.966390 containerd[1517]: time="2026-04-13T20:10:17.966062329Z" level=info msg="TearDown network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\" successfully" Apr 13 20:10:17.966390 containerd[1517]: time="2026-04-13T20:10:17.966083209Z" level=info msg="StopPodSandbox for \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\" returns successfully" Apr 13 20:10:17.966555 containerd[1517]: time="2026-04-13T20:10:17.966532890Z" level=info msg="RemovePodSandbox for \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\"" Apr 13 20:10:17.966577 containerd[1517]: time="2026-04-13T20:10:17.966554130Z" level=info msg="Forcibly stopping sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\"" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:17.991 [WARNING][5706] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c44a049a-7045-454f-8fa8-94f080c00249", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c", Pod:"csi-node-driver-lb95t", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali88e2f06a350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:17.991 [INFO][5706] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:17.991 [INFO][5706] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" iface="eth0" netns="" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:17.992 [INFO][5706] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:17.992 [INFO][5706] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:18.007 [INFO][5713] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:18.007 [INFO][5713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:18.007 [INFO][5713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:18.011 [WARNING][5713] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:18.011 [INFO][5713] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" HandleID="k8s-pod-network.032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Workload="ci--4081--3--7--1--0f1354cb62-k8s-csi--node--driver--lb95t-eth0" Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:18.013 [INFO][5713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.017007 containerd[1517]: 2026-04-13 20:10:18.015 [INFO][5706] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193" Apr 13 20:10:18.017385 containerd[1517]: time="2026-04-13T20:10:18.017030101Z" level=info msg="TearDown network for sandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\" successfully" Apr 13 20:10:18.020949 containerd[1517]: time="2026-04-13T20:10:18.020918302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:18.020997 containerd[1517]: time="2026-04-13T20:10:18.020958802Z" level=info msg="RemovePodSandbox \"032ead87b8bd2c9e1f7b798ab4d4521bc788a8543319e9112c300119c1c02193\" returns successfully" Apr 13 20:10:18.021592 containerd[1517]: time="2026-04-13T20:10:18.021360913Z" level=info msg="StopPodSandbox for \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\"" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.048 [WARNING][5727] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"f935bdd7-a5d9-4d40-8315-4896038786b4", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd", Pod:"calico-apiserver-799d497f46-xzwkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid3510870b70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.049 [INFO][5727] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.049 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" iface="eth0" netns="" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.049 [INFO][5727] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.049 [INFO][5727] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.064 [INFO][5734] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.064 [INFO][5734] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.064 [INFO][5734] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.069 [WARNING][5734] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.069 [INFO][5734] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.070 [INFO][5734] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.074218 containerd[1517]: 2026-04-13 20:10:18.072 [INFO][5727] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.074806 containerd[1517]: time="2026-04-13T20:10:18.074237934Z" level=info msg="TearDown network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\" successfully" Apr 13 20:10:18.074806 containerd[1517]: time="2026-04-13T20:10:18.074257964Z" level=info msg="StopPodSandbox for \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\" returns successfully" Apr 13 20:10:18.074806 containerd[1517]: time="2026-04-13T20:10:18.074697376Z" level=info msg="RemovePodSandbox for \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\"" Apr 13 20:10:18.074806 containerd[1517]: time="2026-04-13T20:10:18.074715556Z" level=info msg="Forcibly stopping sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\"" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.099 [WARNING][5748] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0", GenerateName:"calico-apiserver-799d497f46-", Namespace:"calico-system", SelfLink:"", UID:"f935bdd7-a5d9-4d40-8315-4896038786b4", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799d497f46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"8b551e03ca693f106ab46ca8dd06ad03a1cc33c25944a70da84bc69f9990c9cd", Pod:"calico-apiserver-799d497f46-xzwkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid3510870b70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.099 [INFO][5748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.099 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" iface="eth0" netns="" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.099 [INFO][5748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.099 [INFO][5748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.116 [INFO][5755] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.116 [INFO][5755] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.116 [INFO][5755] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.122 [WARNING][5755] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.122 [INFO][5755] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" HandleID="k8s-pod-network.9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--apiserver--799d497f46--xzwkq-eth0" Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.124 [INFO][5755] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.128655 containerd[1517]: 2026-04-13 20:10:18.126 [INFO][5748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8" Apr 13 20:10:18.128655 containerd[1517]: time="2026-04-13T20:10:18.128634390Z" level=info msg="TearDown network for sandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\" successfully" Apr 13 20:10:18.134172 containerd[1517]: time="2026-04-13T20:10:18.134140596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:18.134299 containerd[1517]: time="2026-04-13T20:10:18.134220546Z" level=info msg="RemovePodSandbox \"9d594bf99f94defd6a462950d88096a8fb0aea71dde213ea2ffb7f8fc432ebd8\" returns successfully" Apr 13 20:10:18.134660 containerd[1517]: time="2026-04-13T20:10:18.134632337Z" level=info msg="StopPodSandbox for \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\"" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.163 [WARNING][5769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1", Pod:"coredns-7d764666f9-ljtsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d3dc21c1b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.163 [INFO][5769] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.163 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" iface="eth0" netns="" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.163 [INFO][5769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.163 [INFO][5769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.185 [INFO][5776] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.185 [INFO][5776] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.185 [INFO][5776] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.190 [WARNING][5776] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.190 [INFO][5776] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.191 [INFO][5776] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.195249 containerd[1517]: 2026-04-13 20:10:18.193 [INFO][5769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.195687 containerd[1517]: time="2026-04-13T20:10:18.195275550Z" level=info msg="TearDown network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\" successfully" Apr 13 20:10:18.195687 containerd[1517]: time="2026-04-13T20:10:18.195297000Z" level=info msg="StopPodSandbox for \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\" returns successfully" Apr 13 20:10:18.195884 containerd[1517]: time="2026-04-13T20:10:18.195855012Z" level=info msg="RemovePodSandbox for \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\"" Apr 13 20:10:18.195908 containerd[1517]: time="2026-04-13T20:10:18.195896292Z" level=info msg="Forcibly stopping sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\"" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.220 [WARNING][5790] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"5854f860-cb7b-4fb3-a3bb-bd6d5b5ffbe3", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"4a84856749b65d3f72d0f037f73c1f99639e5a6ae3ecb4f45a5ca2db10401fe1", Pod:"coredns-7d764666f9-ljtsp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3d3dc21c1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.220 [INFO][5790] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.220 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" iface="eth0" netns="" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.220 [INFO][5790] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.220 [INFO][5790] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.235 [INFO][5798] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.236 [INFO][5798] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.236 [INFO][5798] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.240 [WARNING][5798] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.240 [INFO][5798] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" HandleID="k8s-pod-network.cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Workload="ci--4081--3--7--1--0f1354cb62-k8s-coredns--7d764666f9--ljtsp-eth0" Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.241 [INFO][5798] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.245372 containerd[1517]: 2026-04-13 20:10:18.243 [INFO][5790] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db" Apr 13 20:10:18.245701 containerd[1517]: time="2026-04-13T20:10:18.245405244Z" level=info msg="TearDown network for sandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\" successfully" Apr 13 20:10:18.249233 containerd[1517]: time="2026-04-13T20:10:18.249205234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:18.249307 containerd[1517]: time="2026-04-13T20:10:18.249251545Z" level=info msg="RemovePodSandbox \"cf42cbfc45c98e7cfae880a0ba4d59d9f5c991d325ea1497380975ede4a421db\" returns successfully" Apr 13 20:10:18.249708 containerd[1517]: time="2026-04-13T20:10:18.249688556Z" level=info msg="StopPodSandbox for \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\"" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.273 [WARNING][5813] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0", GenerateName:"calico-kube-controllers-76fdf6b58f-", Namespace:"calico-system", SelfLink:"", UID:"d28dedcd-57de-44c6-aaf6-ca79c2dd6518", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fdf6b58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d", Pod:"calico-kube-controllers-76fdf6b58f-sfc6f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc32aa2bc96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.273 [INFO][5813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.273 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" iface="eth0" netns="" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.273 [INFO][5813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.273 [INFO][5813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.287 [INFO][5820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.288 [INFO][5820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.288 [INFO][5820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.293 [WARNING][5820] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.293 [INFO][5820] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.294 [INFO][5820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.298486 containerd[1517]: 2026-04-13 20:10:18.296 [INFO][5813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.298903 containerd[1517]: time="2026-04-13T20:10:18.298504135Z" level=info msg="TearDown network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\" successfully" Apr 13 20:10:18.298903 containerd[1517]: time="2026-04-13T20:10:18.298524325Z" level=info msg="StopPodSandbox for \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\" returns successfully" Apr 13 20:10:18.299469 containerd[1517]: time="2026-04-13T20:10:18.299188777Z" level=info msg="RemovePodSandbox for \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\"" Apr 13 20:10:18.299469 containerd[1517]: time="2026-04-13T20:10:18.299332538Z" level=info msg="Forcibly stopping sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\"" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.323 [WARNING][5834] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0", GenerateName:"calico-kube-controllers-76fdf6b58f-", Namespace:"calico-system", SelfLink:"", UID:"d28dedcd-57de-44c6-aaf6-ca79c2dd6518", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 9, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76fdf6b58f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-1-0f1354cb62", ContainerID:"5207c9beffb9f3df759764aa65780f25ca4b5e22fe9bd983cd177bad1f42dd6d", Pod:"calico-kube-controllers-76fdf6b58f-sfc6f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc32aa2bc96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.323 [INFO][5834] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.323 [INFO][5834] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" iface="eth0" netns="" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.323 [INFO][5834] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.323 [INFO][5834] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.339 [INFO][5842] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.339 [INFO][5842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.339 [INFO][5842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.343 [WARNING][5842] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.343 [INFO][5842] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" HandleID="k8s-pod-network.23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Workload="ci--4081--3--7--1--0f1354cb62-k8s-calico--kube--controllers--76fdf6b58f--sfc6f-eth0" Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.344 [INFO][5842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:10:18.348489 containerd[1517]: 2026-04-13 20:10:18.346 [INFO][5834] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e" Apr 13 20:10:18.349048 containerd[1517]: time="2026-04-13T20:10:18.348479418Z" level=info msg="TearDown network for sandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\" successfully" Apr 13 20:10:18.352164 containerd[1517]: time="2026-04-13T20:10:18.352127489Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:10:18.352217 containerd[1517]: time="2026-04-13T20:10:18.352176279Z" level=info msg="RemovePodSandbox \"23938bc876a61cb2ef5015c0357abfcab5b794c8436aaa9419074767a39aff3e\" returns successfully" Apr 13 20:10:19.697109 containerd[1517]: time="2026-04-13T20:10:19.697058128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:19.698217 containerd[1517]: time="2026-04-13T20:10:19.698124921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:10:19.699639 containerd[1517]: time="2026-04-13T20:10:19.699066233Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:19.700950 containerd[1517]: time="2026-04-13T20:10:19.700920598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:10:19.701406 containerd[1517]: time="2026-04-13T20:10:19.701382640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.822664328s" Apr 13 20:10:19.701439 containerd[1517]: time="2026-04-13T20:10:19.701409410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:10:19.709411 containerd[1517]: 
time="2026-04-13T20:10:19.709308511Z" level=info msg="CreateContainer within sandbox \"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:10:19.723730 containerd[1517]: time="2026-04-13T20:10:19.723692909Z" level=info msg="CreateContainer within sandbox \"81f21a6d3536acbafed3c393af76c2b28c694ae60805f90384a98068b739b44c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"24a9d7a04e42cd5b471b8d2f2ca34f7b78650f612232006c4822eaa3b9271675\"" Apr 13 20:10:19.724397 containerd[1517]: time="2026-04-13T20:10:19.724135311Z" level=info msg="StartContainer for \"24a9d7a04e42cd5b471b8d2f2ca34f7b78650f612232006c4822eaa3b9271675\"" Apr 13 20:10:19.754263 systemd[1]: run-containerd-runc-k8s.io-24a9d7a04e42cd5b471b8d2f2ca34f7b78650f612232006c4822eaa3b9271675-runc.IyJTcD.mount: Deactivated successfully. Apr 13 20:10:19.766513 systemd[1]: Started cri-containerd-24a9d7a04e42cd5b471b8d2f2ca34f7b78650f612232006c4822eaa3b9271675.scope - libcontainer container 24a9d7a04e42cd5b471b8d2f2ca34f7b78650f612232006c4822eaa3b9271675. 
Apr 13 20:10:19.792933 containerd[1517]: time="2026-04-13T20:10:19.792874135Z" level=info msg="StartContainer for \"24a9d7a04e42cd5b471b8d2f2ca34f7b78650f612232006c4822eaa3b9271675\" returns successfully" Apr 13 20:10:20.407411 kubelet[2594]: I0413 20:10:20.407308 2594 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:10:20.408470 kubelet[2594]: I0413 20:10:20.407440 2594 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:10:20.632240 kubelet[2594]: I0413 20:10:20.632127 2594 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-lb95t" podStartSLOduration=34.60664829 podStartE2EDuration="46.632110938s" podCreationTimestamp="2026-04-13 20:09:34 +0000 UTC" firstStartedPulling="2026-04-13 20:10:07.676914234 +0000 UTC m=+50.441504502" lastFinishedPulling="2026-04-13 20:10:19.702376892 +0000 UTC m=+62.466967150" observedRunningTime="2026-04-13 20:10:20.630465464 +0000 UTC m=+63.395055772" watchObservedRunningTime="2026-04-13 20:10:20.632110938 +0000 UTC m=+63.396701236" Apr 13 20:10:42.470947 systemd[1]: Started sshd@9-62.238.1.80:22-20.229.252.112:39604.service - OpenSSH per-connection server daemon (20.229.252.112:39604). Apr 13 20:10:42.715133 sshd[5950]: Accepted publickey for core from 20.229.252.112 port 39604 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:10:42.716745 sshd[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:10:42.721254 systemd-logind[1494]: New session 10 of user core. Apr 13 20:10:42.730714 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 13 20:10:42.993891 sshd[5950]: pam_unix(sshd:session): session closed for user core Apr 13 20:10:42.998019 systemd[1]: sshd@9-62.238.1.80:22-20.229.252.112:39604.service: Deactivated successfully. Apr 13 20:10:43.002394 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:10:43.006472 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:10:43.008517 systemd-logind[1494]: Removed session 10. Apr 13 20:10:48.048992 systemd[1]: Started sshd@10-62.238.1.80:22-20.229.252.112:57996.service - OpenSSH per-connection server daemon (20.229.252.112:57996). Apr 13 20:10:48.278677 sshd[6011]: Accepted publickey for core from 20.229.252.112 port 57996 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:10:48.281911 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:10:48.291078 systemd-logind[1494]: New session 11 of user core. Apr 13 20:10:48.297585 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:10:48.524731 sshd[6011]: pam_unix(sshd:session): session closed for user core Apr 13 20:10:48.529237 systemd[1]: sshd@10-62.238.1.80:22-20.229.252.112:57996.service: Deactivated successfully. Apr 13 20:10:48.532739 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:10:48.534288 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:10:48.536132 systemd-logind[1494]: Removed session 11. Apr 13 20:10:53.564813 systemd[1]: Started sshd@11-62.238.1.80:22-20.229.252.112:58000.service - OpenSSH per-connection server daemon (20.229.252.112:58000). Apr 13 20:10:53.765558 sshd[6047]: Accepted publickey for core from 20.229.252.112 port 58000 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:10:53.768229 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:10:53.773797 systemd-logind[1494]: New session 12 of user core. 
Apr 13 20:10:53.777577 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:10:54.015822 sshd[6047]: pam_unix(sshd:session): session closed for user core Apr 13 20:10:54.021679 systemd[1]: sshd@11-62.238.1.80:22-20.229.252.112:58000.service: Deactivated successfully. Apr 13 20:10:54.026251 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:10:54.029407 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:10:54.031204 systemd-logind[1494]: Removed session 12. Apr 13 20:10:54.513690 systemd[1]: run-containerd-runc-k8s.io-6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5-runc.i6cxxN.mount: Deactivated successfully. Apr 13 20:10:59.066046 systemd[1]: Started sshd@12-62.238.1.80:22-20.229.252.112:37668.service - OpenSSH per-connection server daemon (20.229.252.112:37668). Apr 13 20:10:59.291665 sshd[6085]: Accepted publickey for core from 20.229.252.112 port 37668 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:10:59.294827 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:10:59.303122 systemd-logind[1494]: New session 13 of user core. Apr 13 20:10:59.311582 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 20:10:59.545130 sshd[6085]: pam_unix(sshd:session): session closed for user core Apr 13 20:10:59.548249 systemd[1]: sshd@12-62.238.1.80:22-20.229.252.112:37668.service: Deactivated successfully. Apr 13 20:10:59.550655 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:10:59.552073 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:10:59.553947 systemd-logind[1494]: Removed session 13. Apr 13 20:11:04.590496 systemd[1]: Started sshd@13-62.238.1.80:22-20.229.252.112:37674.service - OpenSSH per-connection server daemon (20.229.252.112:37674). 
Apr 13 20:11:04.810375 sshd[6120]: Accepted publickey for core from 20.229.252.112 port 37674 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:04.812745 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:04.820574 systemd-logind[1494]: New session 14 of user core. Apr 13 20:11:04.835629 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:11:05.086211 sshd[6120]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:05.092892 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:11:05.093899 systemd[1]: sshd@13-62.238.1.80:22-20.229.252.112:37674.service: Deactivated successfully. Apr 13 20:11:05.098570 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:11:05.101016 systemd-logind[1494]: Removed session 14. Apr 13 20:11:05.131837 systemd[1]: Started sshd@14-62.238.1.80:22-20.229.252.112:50482.service - OpenSSH per-connection server daemon (20.229.252.112:50482). Apr 13 20:11:05.354183 sshd[6134]: Accepted publickey for core from 20.229.252.112 port 50482 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:05.357790 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:05.366042 systemd-logind[1494]: New session 15 of user core. Apr 13 20:11:05.371711 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:11:05.641339 sshd[6134]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:05.644633 systemd[1]: sshd@14-62.238.1.80:22-20.229.252.112:50482.service: Deactivated successfully. Apr 13 20:11:05.646908 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:11:05.647884 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:11:05.649021 systemd-logind[1494]: Removed session 15. 
Apr 13 20:11:05.686583 systemd[1]: Started sshd@15-62.238.1.80:22-20.229.252.112:50498.service - OpenSSH per-connection server daemon (20.229.252.112:50498). Apr 13 20:11:05.901086 sshd[6145]: Accepted publickey for core from 20.229.252.112 port 50498 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:05.904112 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:05.912792 systemd-logind[1494]: New session 16 of user core. Apr 13 20:11:05.920554 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:11:06.145849 sshd[6145]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:06.149896 systemd[1]: sshd@15-62.238.1.80:22-20.229.252.112:50498.service: Deactivated successfully. Apr 13 20:11:06.152165 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:11:06.153025 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:11:06.154884 systemd-logind[1494]: Removed session 16. Apr 13 20:11:11.195030 systemd[1]: Started sshd@16-62.238.1.80:22-20.229.252.112:50502.service - OpenSSH per-connection server daemon (20.229.252.112:50502). Apr 13 20:11:11.409202 sshd[6158]: Accepted publickey for core from 20.229.252.112 port 50502 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:11.412465 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:11.421244 systemd-logind[1494]: New session 17 of user core. Apr 13 20:11:11.429492 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:11:11.640394 sshd[6158]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:11.644384 systemd[1]: sshd@16-62.238.1.80:22-20.229.252.112:50502.service: Deactivated successfully. Apr 13 20:11:11.646422 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:11:11.648384 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit. 
Apr 13 20:11:11.650037 systemd-logind[1494]: Removed session 17. Apr 13 20:11:11.686257 systemd[1]: Started sshd@17-62.238.1.80:22-20.229.252.112:50516.service - OpenSSH per-connection server daemon (20.229.252.112:50516). Apr 13 20:11:11.884980 sshd[6171]: Accepted publickey for core from 20.229.252.112 port 50516 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:11.888041 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:11.897189 systemd-logind[1494]: New session 18 of user core. Apr 13 20:11:11.906572 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 20:11:12.313367 sshd[6171]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:12.320505 systemd[1]: sshd@17-62.238.1.80:22-20.229.252.112:50516.service: Deactivated successfully. Apr 13 20:11:12.324214 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 20:11:12.325654 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit. Apr 13 20:11:12.328679 systemd-logind[1494]: Removed session 18. Apr 13 20:11:12.363177 systemd[1]: Started sshd@18-62.238.1.80:22-20.229.252.112:50532.service - OpenSSH per-connection server daemon (20.229.252.112:50532). Apr 13 20:11:12.586506 sshd[6182]: Accepted publickey for core from 20.229.252.112 port 50532 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:12.591846 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:12.611524 systemd-logind[1494]: New session 19 of user core. Apr 13 20:11:12.618696 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 20:11:13.421910 sshd[6182]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:13.426729 systemd[1]: sshd@18-62.238.1.80:22-20.229.252.112:50532.service: Deactivated successfully. Apr 13 20:11:13.428342 systemd[1]: session-19.scope: Deactivated successfully. 
Apr 13 20:11:13.428493 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit. Apr 13 20:11:13.432710 systemd-logind[1494]: Removed session 19. Apr 13 20:11:13.461372 systemd[1]: Started sshd@19-62.238.1.80:22-20.229.252.112:50540.service - OpenSSH per-connection server daemon (20.229.252.112:50540). Apr 13 20:11:13.673479 sshd[6227]: Accepted publickey for core from 20.229.252.112 port 50540 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:13.676750 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:13.694563 systemd-logind[1494]: New session 20 of user core. Apr 13 20:11:13.700658 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 20:11:14.047206 sshd[6227]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:14.050093 systemd[1]: sshd@19-62.238.1.80:22-20.229.252.112:50540.service: Deactivated successfully. Apr 13 20:11:14.051991 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 20:11:14.053486 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit. Apr 13 20:11:14.055153 systemd-logind[1494]: Removed session 20. Apr 13 20:11:14.091567 systemd[1]: Started sshd@20-62.238.1.80:22-20.229.252.112:50552.service - OpenSSH per-connection server daemon (20.229.252.112:50552). Apr 13 20:11:14.293381 sshd[6237]: Accepted publickey for core from 20.229.252.112 port 50552 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:14.295384 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:14.304054 systemd-logind[1494]: New session 21 of user core. Apr 13 20:11:14.311582 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 20:11:14.540566 sshd[6237]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:14.545582 systemd[1]: sshd@20-62.238.1.80:22-20.229.252.112:50552.service: Deactivated successfully. 
Apr 13 20:11:14.549048 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 20:11:14.550921 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit. Apr 13 20:11:14.552211 systemd-logind[1494]: Removed session 21. Apr 13 20:11:19.590717 systemd[1]: Started sshd@21-62.238.1.80:22-20.229.252.112:34884.service - OpenSSH per-connection server daemon (20.229.252.112:34884). Apr 13 20:11:19.814825 sshd[6285]: Accepted publickey for core from 20.229.252.112 port 34884 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:19.817636 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:19.823997 systemd-logind[1494]: New session 22 of user core. Apr 13 20:11:19.827579 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 13 20:11:20.074181 sshd[6285]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:20.080780 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit. Apr 13 20:11:20.082122 systemd[1]: sshd@21-62.238.1.80:22-20.229.252.112:34884.service: Deactivated successfully. Apr 13 20:11:20.086894 systemd[1]: session-22.scope: Deactivated successfully. Apr 13 20:11:20.088823 systemd-logind[1494]: Removed session 22. Apr 13 20:11:25.119910 systemd[1]: Started sshd@22-62.238.1.80:22-20.229.252.112:58402.service - OpenSSH per-connection server daemon (20.229.252.112:58402). Apr 13 20:11:25.346088 sshd[6322]: Accepted publickey for core from 20.229.252.112 port 58402 ssh2: RSA SHA256:91lU2UnT75sjO2UvH92swWVfw+E1jDNZ0lBYiMr9qe8 Apr 13 20:11:25.349174 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:11:25.362032 systemd-logind[1494]: New session 23 of user core. Apr 13 20:11:25.369606 systemd[1]: Started session-23.scope - Session 23 of User core. 
Apr 13 20:11:25.613967 sshd[6322]: pam_unix(sshd:session): session closed for user core Apr 13 20:11:25.620758 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit. Apr 13 20:11:25.622111 systemd[1]: sshd@22-62.238.1.80:22-20.229.252.112:58402.service: Deactivated successfully. Apr 13 20:11:25.627452 systemd[1]: session-23.scope: Deactivated successfully. Apr 13 20:11:25.632275 systemd-logind[1494]: Removed session 23. Apr 13 20:11:42.534720 systemd[1]: cri-containerd-2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba.scope: Deactivated successfully. Apr 13 20:11:42.536171 systemd[1]: cri-containerd-2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba.scope: Consumed 3.165s CPU time, 18.2M memory peak, 0B memory swap peak. Apr 13 20:11:42.584544 containerd[1517]: time="2026-04-13T20:11:42.582771080Z" level=info msg="shim disconnected" id=2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba namespace=k8s.io Apr 13 20:11:42.585163 containerd[1517]: time="2026-04-13T20:11:42.584570538Z" level=warning msg="cleaning up after shim disconnected" id=2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba namespace=k8s.io Apr 13 20:11:42.585163 containerd[1517]: time="2026-04-13T20:11:42.584659777Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:11:42.588599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba-rootfs.mount: Deactivated successfully. 
Apr 13 20:11:42.611429 containerd[1517]: time="2026-04-13T20:11:42.610169106Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:11:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 13 20:11:42.832685 kubelet[2594]: I0413 20:11:42.831657 2594 scope.go:122] "RemoveContainer" containerID="2cdd781644aeb0eba126cc4e7c7b9f23f9781bf712842f1d28be346a1287bcba"
Apr 13 20:11:42.835258 containerd[1517]: time="2026-04-13T20:11:42.835167488Z" level=info msg="CreateContainer within sandbox \"6ad753419a4348c6904805ed145666c4fe9692136fce90133562deeb30a81ee8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 13 20:11:42.853969 containerd[1517]: time="2026-04-13T20:11:42.853896540Z" level=info msg="CreateContainer within sandbox \"6ad753419a4348c6904805ed145666c4fe9692136fce90133562deeb30a81ee8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9737d54ad1a597c164b7131851c8c8b8b5a61a4e694ed3efc3f758b3390dd77c\""
Apr 13 20:11:42.857502 containerd[1517]: time="2026-04-13T20:11:42.855038996Z" level=info msg="StartContainer for \"9737d54ad1a597c164b7131851c8c8b8b5a61a4e694ed3efc3f758b3390dd77c\""
Apr 13 20:11:42.909490 systemd[1]: Started cri-containerd-9737d54ad1a597c164b7131851c8c8b8b5a61a4e694ed3efc3f758b3390dd77c.scope - libcontainer container 9737d54ad1a597c164b7131851c8c8b8b5a61a4e694ed3efc3f758b3390dd77c.
Apr 13 20:11:42.949502 containerd[1517]: time="2026-04-13T20:11:42.949260915Z" level=info msg="StartContainer for \"9737d54ad1a597c164b7131851c8c8b8b5a61a4e694ed3efc3f758b3390dd77c\" returns successfully"
Apr 13 20:11:42.954354 kubelet[2594]: E0413 20:11:42.954128 2594 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39782->10.0.0.2:2379: read: connection timed out"
Apr 13 20:11:43.227430 systemd[1]: cri-containerd-22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd.scope: Deactivated successfully.
Apr 13 20:11:43.227915 systemd[1]: cri-containerd-22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd.scope: Consumed 8.086s CPU time.
Apr 13 20:11:43.261752 containerd[1517]: time="2026-04-13T20:11:43.261395257Z" level=info msg="shim disconnected" id=22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd namespace=k8s.io
Apr 13 20:11:43.261752 containerd[1517]: time="2026-04-13T20:11:43.261467716Z" level=warning msg="cleaning up after shim disconnected" id=22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd namespace=k8s.io
Apr 13 20:11:43.261752 containerd[1517]: time="2026-04-13T20:11:43.261483116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:11:43.588120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd-rootfs.mount: Deactivated successfully.
Apr 13 20:11:43.837560 kubelet[2594]: I0413 20:11:43.837476 2594 scope.go:122] "RemoveContainer" containerID="22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd"
Apr 13 20:11:43.839799 containerd[1517]: time="2026-04-13T20:11:43.839728631Z" level=info msg="CreateContainer within sandbox \"f02aa4a6b8181241717df3deb0b5c6cb47e105d967f3694533c9d98bc20e9578\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 13 20:11:43.857127 containerd[1517]: time="2026-04-13T20:11:43.856975194Z" level=info msg="CreateContainer within sandbox \"f02aa4a6b8181241717df3deb0b5c6cb47e105d967f3694533c9d98bc20e9578\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57\""
Apr 13 20:11:43.859086 containerd[1517]: time="2026-04-13T20:11:43.857786874Z" level=info msg="StartContainer for \"9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57\""
Apr 13 20:11:43.905543 systemd[1]: Started cri-containerd-9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57.scope - libcontainer container 9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57.
Apr 13 20:11:43.928714 containerd[1517]: time="2026-04-13T20:11:43.928670363Z" level=info msg="StartContainer for \"9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57\" returns successfully"
Apr 13 20:11:45.514528 kubelet[2594]: E0413 20:11:45.511561 2594 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39408->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-1-0f1354cb62.18a603a176b1d591 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-1-0f1354cb62,UID:74eb50dddacce6ae832ce90b59b1a89b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-1-0f1354cb62,},FirstTimestamp:2026-04-13 20:11:35.060223377 +0000 UTC m=+137.824813685,LastTimestamp:2026-04-13 20:11:35.060223377 +0000 UTC m=+137.824813685,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-1-0f1354cb62,}"
Apr 13 20:11:48.349808 systemd[1]: cri-containerd-78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73.scope: Deactivated successfully.
Apr 13 20:11:48.350482 systemd[1]: cri-containerd-78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73.scope: Consumed 1.508s CPU time, 14.0M memory peak, 0B memory swap peak.
Apr 13 20:11:48.369976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73-rootfs.mount: Deactivated successfully.
Apr 13 20:11:48.370556 containerd[1517]: time="2026-04-13T20:11:48.370515887Z" level=info msg="shim disconnected" id=78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73 namespace=k8s.io
Apr 13 20:11:48.371346 containerd[1517]: time="2026-04-13T20:11:48.370828883Z" level=warning msg="cleaning up after shim disconnected" id=78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73 namespace=k8s.io
Apr 13 20:11:48.371346 containerd[1517]: time="2026-04-13T20:11:48.370841943Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:11:48.855052 kubelet[2594]: I0413 20:11:48.854992 2594 scope.go:122] "RemoveContainer" containerID="78a75da9f35780d9932c48f2ec66dda1f5e3ef9ef96c508ed74582335a570a73"
Apr 13 20:11:48.862595 containerd[1517]: time="2026-04-13T20:11:48.862533842Z" level=info msg="CreateContainer within sandbox \"de66db91b85ab61eea4662fbf0502128984d58e0e272b25d207092d060b38dc4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 13 20:11:48.886898 containerd[1517]: time="2026-04-13T20:11:48.886847911Z" level=info msg="CreateContainer within sandbox \"de66db91b85ab61eea4662fbf0502128984d58e0e272b25d207092d060b38dc4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"717748bbbaee1d8dccfc3a674229abb741fe1e6b0b74aef54413a123aebf797f\""
Apr 13 20:11:48.889008 containerd[1517]: time="2026-04-13T20:11:48.888763350Z" level=info msg="StartContainer for \"717748bbbaee1d8dccfc3a674229abb741fe1e6b0b74aef54413a123aebf797f\""
Apr 13 20:11:48.940432 systemd[1]: Started cri-containerd-717748bbbaee1d8dccfc3a674229abb741fe1e6b0b74aef54413a123aebf797f.scope - libcontainer container 717748bbbaee1d8dccfc3a674229abb741fe1e6b0b74aef54413a123aebf797f.
Apr 13 20:11:48.979195 containerd[1517]: time="2026-04-13T20:11:48.979142756Z" level=info msg="StartContainer for \"717748bbbaee1d8dccfc3a674229abb741fe1e6b0b74aef54413a123aebf797f\" returns successfully"
Apr 13 20:11:52.954980 kubelet[2594]: E0413 20:11:52.954761 2594 controller.go:251] "Failed to update lease" err="Put \"https://62.238.1.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-1-0f1354cb62?timeout=10s\": context deadline exceeded"
Apr 13 20:11:54.532010 systemd[1]: run-containerd-runc-k8s.io-6409143c9375faadd939dde63f6ad5acfae523e79098fa67089ae33c70546cf5-runc.BFN5KY.mount: Deactivated successfully.
Apr 13 20:11:55.116253 systemd[1]: cri-containerd-9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57.scope: Deactivated successfully.
Apr 13 20:11:55.160235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57-rootfs.mount: Deactivated successfully.
Apr 13 20:11:55.170731 containerd[1517]: time="2026-04-13T20:11:55.170617761Z" level=info msg="shim disconnected" id=9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57 namespace=k8s.io
Apr 13 20:11:55.170731 containerd[1517]: time="2026-04-13T20:11:55.170704800Z" level=warning msg="cleaning up after shim disconnected" id=9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57 namespace=k8s.io
Apr 13 20:11:55.170731 containerd[1517]: time="2026-04-13T20:11:55.170717530Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 13 20:11:55.879388 kubelet[2594]: I0413 20:11:55.879311 2594 scope.go:122] "RemoveContainer" containerID="22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd"
Apr 13 20:11:55.880451 kubelet[2594]: I0413 20:11:55.880092 2594 scope.go:122] "RemoveContainer" containerID="9e7d2f18b582e62a474d979f16d41625da1d8f6a7702c55d0d0931f14240da57"
Apr 13 20:11:55.880451 kubelet[2594]: E0413 20:11:55.880309 2594 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-fn9gz_tigera-operator(bda35074-8f5b-470f-905b-b2c148192419)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-fn9gz" podUID="bda35074-8f5b-470f-905b-b2c148192419"
Apr 13 20:11:55.883092 containerd[1517]: time="2026-04-13T20:11:55.882177320Z" level=info msg="RemoveContainer for \"22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd\""
Apr 13 20:11:55.890469 containerd[1517]: time="2026-04-13T20:11:55.890301238Z" level=info msg="RemoveContainer for \"22063a4fdc27a970cb4186335e41f35d853d8108ddcd0c030363ed83e28f31fd\" returns successfully"
Apr 13 20:12:02.956469 kubelet[2594]: E0413 20:12:02.955978 2594 controller.go:251] "Failed to update lease" err="Put \"https://62.238.1.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-1-0f1354cb62?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"