Apr 24 23:35:57.044824 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 24 22:11:38 -00 2026
Apr 24 23:35:57.044841 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:35:57.044850 kernel: BIOS-provided physical RAM map:
Apr 24 23:35:57.044855 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 24 23:35:57.044859 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 24 23:35:57.044863 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 24 23:35:57.044868 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 24 23:35:57.044873 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 24 23:35:57.044877 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 24 23:35:57.044881 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 24 23:35:57.044886 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 24 23:35:57.044893 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 24 23:35:57.044897 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 24 23:35:57.044901 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 24 23:35:57.044907 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 24 23:35:57.044911 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 23:35:57.044918 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 24 23:35:57.044923 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 24 23:35:57.044927 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 24 23:35:57.044932 kernel: NX (Execute Disable) protection: active
Apr 24 23:35:57.044937 kernel: APIC: Static calls initialized
Apr 24 23:35:57.044941 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 24 23:35:57.044946 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e84f198
Apr 24 23:35:57.044951 kernel: efi: Remove mem137: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 24 23:35:57.044955 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 24 23:35:57.044960 kernel: SMBIOS 3.0.0 present.
Apr 24 23:35:57.044965 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 24 23:35:57.044969 kernel: Hypervisor detected: KVM
Apr 24 23:35:57.044976 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 23:35:57.044980 kernel: kvm-clock: using sched offset of 12603882902 cycles
Apr 24 23:35:57.044985 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 23:35:57.044991 kernel: tsc: Detected 2396.398 MHz processor
Apr 24 23:35:57.044995 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 23:35:57.045000 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 23:35:57.045005 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 24 23:35:57.045010 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 24 23:35:57.045015 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 23:35:57.045021 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 24 23:35:57.045026 kernel: Using GB pages for direct mapping
Apr 24 23:35:57.045031 kernel: Secure boot disabled
Apr 24 23:35:57.045039 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:35:57.045044 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 24 23:35:57.045049 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 24 23:35:57.045054 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:35:57.045061 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:35:57.045066 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 24 23:35:57.045071 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:35:57.045075 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:35:57.045080 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:35:57.045085 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:35:57.045090 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 24 23:35:57.045098 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 24 23:35:57.045103 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 24 23:35:57.045107 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 24 23:35:57.045112 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 24 23:35:57.045117 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 24 23:35:57.045122 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 24 23:35:57.045127 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 24 23:35:57.045132 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 24 23:35:57.045137 kernel: No NUMA configuration found
Apr 24 23:35:57.045144 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 24 23:35:57.045149 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Apr 24 23:35:57.045162 kernel: Zone ranges:
Apr 24 23:35:57.045167 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 23:35:57.045172 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 24 23:35:57.045177 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 24 23:35:57.045182 kernel: Movable zone start for each node
Apr 24 23:35:57.045187 kernel: Early memory node ranges
Apr 24 23:35:57.045192 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 24 23:35:57.045197 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 24 23:35:57.045205 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 24 23:35:57.045210 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 24 23:35:57.045215 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 24 23:35:57.045220 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 24 23:35:57.045225 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 23:35:57.045230 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 24 23:35:57.045235 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 24 23:35:57.045240 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 24 23:35:57.045244 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 24 23:35:57.045252 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 24 23:35:57.045257 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 23:35:57.045262 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 23:35:57.045267 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 23:35:57.045271 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 23:35:57.045276 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 23:35:57.045281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 23:35:57.045286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 23:35:57.045291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 23:35:57.045298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 23:35:57.045303 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 23:35:57.045308 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 24 23:35:57.045313 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 23:35:57.045318 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 24 23:35:57.045323 kernel: Booting paravirtualized kernel on KVM
Apr 24 23:35:57.045328 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 23:35:57.045333 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 23:35:57.045338 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 24 23:35:57.045345 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 24 23:35:57.045350 kernel: pcpu-alloc: [0] 0 1
Apr 24 23:35:57.045355 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 24 23:35:57.045360 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb
Apr 24 23:35:57.045365 kernel: random: crng init done
Apr 24 23:35:57.045370 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:35:57.045375 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:35:57.045380 kernel: Fallback order for Node 0: 0
Apr 24 23:35:57.045387 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 24 23:35:57.045392 kernel: Policy zone: Normal
Apr 24 23:35:57.045397 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:35:57.045402 kernel: software IO TLB: area num 2.
Apr 24 23:35:57.045407 kernel: Memory: 3819392K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 271572K reserved, 0K cma-reserved)
Apr 24 23:35:57.045412 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:35:57.045417 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 24 23:35:57.045422 kernel: ftrace: allocated 149 pages with 4 groups
Apr 24 23:35:57.045427 kernel: Dynamic Preempt: voluntary
Apr 24 23:35:57.045434 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:35:57.045440 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:35:57.045445 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:35:57.045450 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:35:57.045472 kernel: Rude variant of Tasks RCU enabled.
Apr 24 23:35:57.045480 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:35:57.045485 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:35:57.045490 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:35:57.045495 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 24 23:35:57.045500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:35:57.045505 kernel: Console: colour dummy device 80x25
Apr 24 23:35:57.045511 kernel: printk: console [tty0] enabled
Apr 24 23:35:57.045518 kernel: printk: console [ttyS0] enabled
Apr 24 23:35:57.045523 kernel: ACPI: Core revision 20230628
Apr 24 23:35:57.045533 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 23:35:57.045538 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 23:35:57.045543 kernel: x2apic enabled
Apr 24 23:35:57.045550 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 23:35:57.045556 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 23:35:57.045561 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 24 23:35:57.045566 kernel: Calibrating delay loop (skipped) preset value.. 4792.79 BogoMIPS (lpj=2396398)
Apr 24 23:35:57.045571 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 23:35:57.045576 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 24 23:35:57.045581 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 24 23:35:57.045587 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 23:35:57.045592 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 24 23:35:57.045599 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 24 23:35:57.045604 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 24 23:35:57.045610 kernel: active return thunk: srso_alias_return_thunk
Apr 24 23:35:57.045615 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 24 23:35:57.045620 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 24 23:35:57.045625 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 23:35:57.045630 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 23:35:57.045635 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 23:35:57.045640 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 23:35:57.045648 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 24 23:35:57.045653 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 24 23:35:57.045658 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 24 23:35:57.045664 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 24 23:35:57.045669 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 23:35:57.045674 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 24 23:35:57.045679 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 24 23:35:57.045684 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 24 23:35:57.045689 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 24 23:35:57.045697 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 24 23:35:57.045702 kernel: Freeing SMP alternatives memory: 32K
Apr 24 23:35:57.045707 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:35:57.045714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:35:57.045722 kernel: landlock: Up and running.
Apr 24 23:35:57.045730 kernel: SELinux: Initializing.
Apr 24 23:35:57.045736 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:35:57.045741 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:35:57.045748 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 24 23:35:57.045759 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:35:57.045767 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:35:57.045775 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:35:57.045783 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 24 23:35:57.045790 kernel: ... version: 0
Apr 24 23:35:57.045795 kernel: ... bit width: 48
Apr 24 23:35:57.045800 kernel: ... generic registers: 6
Apr 24 23:35:57.045805 kernel: ... value mask: 0000ffffffffffff
Apr 24 23:35:57.045813 kernel: ... max period: 00007fffffffffff
Apr 24 23:35:57.045818 kernel: ... fixed-purpose events: 0
Apr 24 23:35:57.045823 kernel: ... event mask: 000000000000003f
Apr 24 23:35:57.045828 kernel: signal: max sigframe size: 3376
Apr 24 23:35:57.045833 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:35:57.045839 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:35:57.045844 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:35:57.045849 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 23:35:57.045854 kernel: .... node #0, CPUs: #1
Apr 24 23:35:57.045859 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:35:57.045867 kernel: smpboot: Max logical packages: 1
Apr 24 23:35:57.045872 kernel: smpboot: Total of 2 processors activated (9585.59 BogoMIPS)
Apr 24 23:35:57.045877 kernel: devtmpfs: initialized
Apr 24 23:35:57.045882 kernel: x86/mm: Memory block size: 128MB
Apr 24 23:35:57.045887 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 24 23:35:57.045893 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:35:57.045898 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:35:57.045903 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:35:57.045908 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:35:57.045915 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:35:57.045921 kernel: audit: type=2000 audit(1777073755.936:1): state=initialized audit_enabled=0 res=1
Apr 24 23:35:57.045926 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:35:57.045931 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 23:35:57.045936 kernel: cpuidle: using governor menu
Apr 24 23:35:57.045941 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:35:57.045946 kernel: dca service started, version 1.12.1
Apr 24 23:35:57.045952 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 24 23:35:57.045957 kernel: PCI: Using configuration type 1 for base access
Apr 24 23:35:57.045964 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 23:35:57.045970 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:35:57.045975 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:35:57.045980 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:35:57.045985 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:35:57.045990 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:35:57.045995 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:35:57.046000 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:35:57.046005 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:35:57.046013 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 24 23:35:57.046018 kernel: ACPI: Interpreter enabled
Apr 24 23:35:57.046023 kernel: ACPI: PM: (supports S0 S5)
Apr 24 23:35:57.046028 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 23:35:57.046034 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 23:35:57.046039 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 23:35:57.046044 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 23:35:57.046049 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:35:57.046214 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:35:57.046322 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 23:35:57.046421 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 23:35:57.046428 kernel: PCI host bridge to bus 0000:00
Apr 24 23:35:57.046552 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 23:35:57.046643 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 23:35:57.046736 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 23:35:57.046830 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 24 23:35:57.046917 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 24 23:35:57.047005 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 24 23:35:57.047091 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:35:57.047208 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 24 23:35:57.047313 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 24 23:35:57.047417 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 24 23:35:57.047529 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 24 23:35:57.047631 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 24 23:35:57.047735 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 24 23:35:57.047835 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 24 23:35:57.047932 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 23:35:57.048038 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.048138 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 24 23:35:57.048250 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.048345 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 24 23:35:57.048466 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.048598 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 24 23:35:57.048709 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.048810 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 24 23:35:57.048915 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.049021 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 24 23:35:57.049133 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.049242 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 24 23:35:57.049357 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.049491 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 24 23:35:57.049612 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.049707 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 24 23:35:57.049808 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 24 23:35:57.049905 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 24 23:35:57.050018 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 24 23:35:57.050121 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 23:35:57.050248 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 24 23:35:57.050345 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 24 23:35:57.050441 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 24 23:35:57.050626 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 24 23:35:57.050726 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 24 23:35:57.050834 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 24 23:35:57.050938 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 24 23:35:57.051038 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 24 23:35:57.051138 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 24 23:35:57.051243 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 24 23:35:57.051340 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 24 23:35:57.051435 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 24 23:35:57.051599 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 24 23:35:57.051704 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 24 23:35:57.051811 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 24 23:35:57.051908 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 24 23:35:57.052015 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 24 23:35:57.052115 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 24 23:35:57.052224 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 24 23:35:57.052323 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 24 23:35:57.052418 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 24 23:35:57.052544 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 24 23:35:57.052672 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 24 23:35:57.052774 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 24 23:35:57.052873 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 24 23:35:57.052968 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 24 23:35:57.053077 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 24 23:35:57.053198 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 24 23:35:57.053304 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 24 23:35:57.053400 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 24 23:35:57.053535 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 24 23:35:57.053634 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 24 23:35:57.053745 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 24 23:35:57.053856 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 24 23:35:57.053961 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 24 23:35:57.054057 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 24 23:35:57.054152 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 24 23:35:57.054260 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 24 23:35:57.054266 kernel: acpiphp: Slot [0] registered
Apr 24 23:35:57.054374 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 24 23:35:57.054507 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 24 23:35:57.054614 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 24 23:35:57.054728 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 24 23:35:57.055640 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 24 23:35:57.055770 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 24 23:35:57.055872 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 24 23:35:57.055879 kernel: acpiphp: Slot [0-2] registered
Apr 24 23:35:57.055980 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 24 23:35:57.056077 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 24 23:35:57.056188 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 24 23:35:57.056196 kernel: acpiphp: Slot [0-3] registered
Apr 24 23:35:57.056294 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 24 23:35:57.056389 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 24 23:35:57.056504 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 24 23:35:57.056511 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 23:35:57.056517 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 23:35:57.056522 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 23:35:57.056528 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 23:35:57.056537 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 23:35:57.056543 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 23:35:57.056549 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 23:35:57.056555 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 23:35:57.056560 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 23:35:57.056566 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 23:35:57.056571 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 23:35:57.056577 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 23:35:57.056582 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 23:35:57.056590 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 23:35:57.056596 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 23:35:57.056601 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 23:35:57.056606 kernel: iommu: Default domain type: Translated
Apr 24 23:35:57.056612 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 23:35:57.056617 kernel: efivars: Registered efivars operations
Apr 24 23:35:57.056623 kernel: PCI: Using ACPI for IRQ routing
Apr 24 23:35:57.056629 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 23:35:57.056635 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 24 23:35:57.056643 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 24 23:35:57.056648 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 24 23:35:57.056654 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 24 23:35:57.056753 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 23:35:57.056850 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 23:35:57.056946 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 23:35:57.056953 kernel: vgaarb: loaded
Apr 24 23:35:57.056959 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 23:35:57.056964 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 23:35:57.056973 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 23:35:57.056978 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:35:57.056984 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:35:57.056990 kernel: pnp: PnP ACPI init
Apr 24 23:35:57.057101 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 24 23:35:57.057110 kernel: pnp: PnP ACPI: found 5 devices
Apr 24 23:35:57.057116 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 23:35:57.057122 kernel: NET: Registered PF_INET protocol family
Apr 24 23:35:57.057143 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:35:57.057151 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:35:57.057165 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:35:57.057171 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:35:57.057176 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 23:35:57.057182 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 23:35:57.057188 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:35:57.057193 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 23:35:57.057201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 23:35:57.057207 kernel: NET: Registered PF_XDP protocol family
Apr 24 23:35:57.057316 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 24 23:35:57.057423 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 24 23:35:57.057542 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 24 23:35:57.057640 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 24 23:35:57.057737 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 24 23:35:57.057833 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 24 23:35:57.057934 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 24 23:35:57.058034 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 24 23:35:57.058136 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 24 23:35:57.058245 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 24 23:35:57.058346 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 24 23:35:57.058441 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 24 23:35:57.058558 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 24 23:35:57.058655 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 24 23:35:57.058758 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 24 23:35:57.058854 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 24 23:35:57.058949 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 24 23:35:57.059047 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 24 23:35:57.059144 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 24 23:35:57.059256 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 24 23:35:57.059352 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 24 23:35:57.059448 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 24 23:35:57.061636 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 24 23:35:57.061755 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 24 23:35:57.061855 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 24 23:35:57.061969 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 24 23:35:57.062073 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 24 23:35:57.062179 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 24 23:35:57.062276 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 24 23:35:57.062372 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 24 23:35:57.062517 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 24 23:35:57.062617 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 24 23:35:57.062711 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 24 23:35:57.062805 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 24 23:35:57.062900 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 24 23:35:57.062998 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 24 23:35:57.063093 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 24 23:35:57.063197 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 24 23:35:57.063292 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 
24 23:35:57.063381 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 24 23:35:57.064580 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 24 23:35:57.064680 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 24 23:35:57.064769 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 24 23:35:57.064856 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 24 23:35:57.064959 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 24 23:35:57.065053 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 24 23:35:57.065163 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 24 23:35:57.065270 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 24 23:35:57.065363 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 24 23:35:57.065479 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 24 23:35:57.065582 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 24 23:35:57.065679 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 24 23:35:57.065782 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 24 23:35:57.065883 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 24 23:35:57.065986 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 24 23:35:57.066079 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 24 23:35:57.066177 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 24 23:35:57.066277 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 24 23:35:57.066369 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 24 23:35:57.069516 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 24 23:35:57.069641 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 24 23:35:57.069737 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 24 23:35:57.069831 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 24 23:35:57.069839 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 24 23:35:57.069845 kernel: PCI: CLS 0 bytes, default 64 Apr 24 23:35:57.069851 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 24 23:35:57.069857 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 24 23:35:57.069867 kernel: Initialise system trusted keyrings Apr 24 23:35:57.069873 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 24 23:35:57.069879 kernel: Key type asymmetric registered Apr 24 23:35:57.069884 kernel: Asymmetric key parser 'x509' registered Apr 24 23:35:57.069890 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 24 23:35:57.069895 kernel: io scheduler mq-deadline registered Apr 24 23:35:57.069901 kernel: io scheduler kyber registered Apr 24 23:35:57.069907 kernel: io scheduler bfq registered Apr 24 23:35:57.070008 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 24 23:35:57.070124 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 24 23:35:57.070263 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 24 23:35:57.070366 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 24 23:35:57.070477 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 24 23:35:57.070606 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 24 23:35:57.070740 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 24 23:35:57.070842 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 24 23:35:57.070939 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 24 23:35:57.071063 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 24 23:35:57.071188 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 24 
23:35:57.071291 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 24 23:35:57.071388 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 24 23:35:57.074472 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 24 23:35:57.074595 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 24 23:35:57.074706 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 24 23:35:57.074735 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 24 23:35:57.074853 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 24 23:35:57.074971 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 24 23:35:57.074983 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 24 23:35:57.074993 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 24 23:35:57.075002 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 24 23:35:57.075007 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 24 23:35:57.075013 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 24 23:35:57.075019 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 24 23:35:57.075025 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 24 23:35:57.075139 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 24 23:35:57.075161 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 24 23:35:57.075273 kernel: rtc_cmos 00:03: registered as rtc0 Apr 24 23:35:57.075382 kernel: rtc_cmos 00:03: setting system clock to 2026-04-24T23:35:56 UTC (1777073756) Apr 24 23:35:57.075555 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 24 23:35:57.075565 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 24 23:35:57.075572 kernel: efifb: probing for efifb Apr 24 23:35:57.075577 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 24 23:35:57.075587 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 24 
23:35:57.075593 kernel: efifb: scrolling: redraw Apr 24 23:35:57.075599 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 24 23:35:57.075604 kernel: Console: switching to colour frame buffer device 160x50 Apr 24 23:35:57.075610 kernel: fb0: EFI VGA frame buffer device Apr 24 23:35:57.075616 kernel: pstore: Using crash dump compression: deflate Apr 24 23:35:57.075621 kernel: pstore: Registered efi_pstore as persistent store backend Apr 24 23:35:57.075627 kernel: NET: Registered PF_INET6 protocol family Apr 24 23:35:57.075632 kernel: Segment Routing with IPv6 Apr 24 23:35:57.075640 kernel: In-situ OAM (IOAM) with IPv6 Apr 24 23:35:57.075646 kernel: NET: Registered PF_PACKET protocol family Apr 24 23:35:57.075652 kernel: Key type dns_resolver registered Apr 24 23:35:57.075658 kernel: IPI shorthand broadcast: enabled Apr 24 23:35:57.075663 kernel: sched_clock: Marking stable (1412014985, 219397771)->(1686156547, -54743791) Apr 24 23:35:57.075669 kernel: registered taskstats version 1 Apr 24 23:35:57.075675 kernel: Loading compiled-in X.509 certificates Apr 24 23:35:57.075680 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 507f116e6718ec7535b55c873de10edf9b6fe124' Apr 24 23:35:57.075686 kernel: Key type .fscrypt registered Apr 24 23:35:57.075691 kernel: Key type fscrypt-provisioning registered Apr 24 23:35:57.075699 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 24 23:35:57.075705 kernel: ima: Allocated hash algorithm: sha1 Apr 24 23:35:57.075711 kernel: ima: No architecture policies found Apr 24 23:35:57.075716 kernel: clk: Disabling unused clocks Apr 24 23:35:57.075722 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 24 23:35:57.075728 kernel: Write protecting the kernel read-only data: 36864k Apr 24 23:35:57.075733 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 24 23:35:57.075739 kernel: Run /init as init process Apr 24 23:35:57.075748 kernel: with arguments: Apr 24 23:35:57.075753 kernel: /init Apr 24 23:35:57.075759 kernel: with environment: Apr 24 23:35:57.075764 kernel: HOME=/ Apr 24 23:35:57.075772 kernel: TERM=linux Apr 24 23:35:57.075780 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 24 23:35:57.075788 systemd[1]: Detected virtualization kvm. Apr 24 23:35:57.075794 systemd[1]: Detected architecture x86-64. Apr 24 23:35:57.075802 systemd[1]: Running in initrd. Apr 24 23:35:57.075808 systemd[1]: No hostname configured, using default hostname. Apr 24 23:35:57.075814 systemd[1]: Hostname set to . Apr 24 23:35:57.075820 systemd[1]: Initializing machine ID from VM UUID. Apr 24 23:35:57.075826 systemd[1]: Queued start job for default target initrd.target. Apr 24 23:35:57.075832 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:35:57.075838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:35:57.075845 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 24 23:35:57.075853 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:35:57.075859 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 24 23:35:57.075865 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 24 23:35:57.075873 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 24 23:35:57.075879 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 24 23:35:57.075885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:35:57.075893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:35:57.075899 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:35:57.075905 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:35:57.075911 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:35:57.075916 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:35:57.075922 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 24 23:35:57.075928 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 24 23:35:57.075934 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 24 23:35:57.075940 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 24 23:35:57.075949 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:35:57.075955 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:35:57.075961 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:35:57.075967 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 24 23:35:57.075972 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 24 23:35:57.075978 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:35:57.075984 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 24 23:35:57.075990 systemd[1]: Starting systemd-fsck-usr.service... Apr 24 23:35:57.075996 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:35:57.076022 systemd-journald[187]: Collecting audit messages is disabled. Apr 24 23:35:57.076039 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:35:57.076045 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:35:57.076054 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 24 23:35:57.076060 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:35:57.076066 systemd-journald[187]: Journal started Apr 24 23:35:57.076080 systemd-journald[187]: Runtime Journal (/run/log/journal/54d06193cd8d4c319d5b0b3312d6194b) is 8.0M, max 76.3M, 68.3M free. Apr 24 23:35:57.079360 systemd[1]: Finished systemd-fsck-usr.service. Apr 24 23:35:57.080105 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:35:57.079763 systemd-modules-load[189]: Inserted module 'overlay' Apr 24 23:35:57.081125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:35:57.088629 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:35:57.092564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:35:57.094292 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 23:35:57.108246 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 24 23:35:57.108761 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 24 23:35:57.109763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:35:57.111635 kernel: Bridge firewalling registered Apr 24 23:35:57.110120 systemd-modules-load[189]: Inserted module 'br_netfilter' Apr 24 23:35:57.113623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:35:57.116048 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 24 23:35:57.117621 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:35:57.119941 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:35:57.125447 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:35:57.139439 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:35:57.143262 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:35:57.145946 dracut-cmdline[217]: dracut-dracut-053 Apr 24 23:35:57.151830 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c8442747465ed99a522e07b8746f6a7817fb39c2025d7438698e3b90e9c0defb Apr 24 23:35:57.150631 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:35:57.175632 systemd-resolved[233]: Positive Trust Anchors: Apr 24 23:35:57.175646 systemd-resolved[233]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:35:57.175668 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:35:57.178303 systemd-resolved[233]: Defaulting to hostname 'linux'. Apr 24 23:35:57.179253 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:35:57.179705 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:35:57.219480 kernel: SCSI subsystem initialized Apr 24 23:35:57.227500 kernel: Loading iSCSI transport class v2.0-870. Apr 24 23:35:57.235481 kernel: iscsi: registered transport (tcp) Apr 24 23:35:57.253346 kernel: iscsi: registered transport (qla4xxx) Apr 24 23:35:57.253401 kernel: QLogic iSCSI HBA Driver Apr 24 23:35:57.311358 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 24 23:35:57.315599 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 24 23:35:57.338521 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 24 23:35:57.338567 kernel: device-mapper: uevent: version 1.0.3 Apr 24 23:35:57.342170 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 24 23:35:57.379485 kernel: raid6: avx512x4 gen() 45145 MB/s Apr 24 23:35:57.397480 kernel: raid6: avx512x2 gen() 46740 MB/s Apr 24 23:35:57.415483 kernel: raid6: avx512x1 gen() 43640 MB/s Apr 24 23:35:57.433477 kernel: raid6: avx2x4 gen() 47600 MB/s Apr 24 23:35:57.451478 kernel: raid6: avx2x2 gen() 49705 MB/s Apr 24 23:35:57.470516 kernel: raid6: avx2x1 gen() 38312 MB/s Apr 24 23:35:57.470604 kernel: raid6: using algorithm avx2x2 gen() 49705 MB/s Apr 24 23:35:57.490620 kernel: raid6: .... xor() 36680 MB/s, rmw enabled Apr 24 23:35:57.490688 kernel: raid6: using avx512x2 recovery algorithm Apr 24 23:35:57.507556 kernel: xor: automatically using best checksumming function avx Apr 24 23:35:57.620523 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 24 23:35:57.637434 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 24 23:35:57.644595 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:35:57.679140 systemd-udevd[409]: Using default interface naming scheme 'v255'. Apr 24 23:35:57.683007 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:35:57.690718 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 24 23:35:57.708027 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Apr 24 23:35:57.751315 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 24 23:35:57.757690 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:35:57.828128 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:35:57.840832 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 24 23:35:57.867738 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 24 23:35:57.868544 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 24 23:35:57.869508 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:35:57.870189 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:35:57.873756 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 24 23:35:57.884650 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 24 23:35:57.915481 kernel: scsi host0: Virtio SCSI HBA Apr 24 23:35:57.922975 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 24 23:35:57.927507 kernel: cryptd: max_cpu_qlen set to 1000 Apr 24 23:35:57.943299 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 24 23:35:57.943871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:35:57.945408 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:35:57.945755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:35:57.945863 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:35:57.946213 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:35:57.956686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:35:57.965114 kernel: AVX2 version of gcm_enc/dec engaged. Apr 24 23:35:57.965143 kernel: AES CTR mode by8 optimization enabled Apr 24 23:35:57.977660 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 24 23:35:57.977864 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 24 23:35:57.978224 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 24 23:35:57.996959 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 24 23:35:57.997141 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 24 23:35:57.997282 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 24 23:35:57.997404 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 24 23:35:57.997412 kernel: GPT:17805311 != 160006143 Apr 24 23:35:57.997420 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 24 23:35:57.997427 kernel: GPT:17805311 != 160006143 Apr 24 23:35:57.997435 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 24 23:35:57.997442 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:35:58.002647 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 24 23:35:58.005508 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 24 23:35:58.016480 kernel: ACPI: bus type USB registered Apr 24 23:35:58.028862 kernel: usbcore: registered new interface driver usbfs Apr 24 23:35:58.028887 kernel: usbcore: registered new interface driver hub Apr 24 23:35:58.032482 kernel: usbcore: registered new device driver usb Apr 24 23:35:58.035509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 24 23:35:58.036791 kernel: libata version 3.00 loaded. 
Apr 24 23:35:58.056484 kernel: ahci 0000:00:1f.2: version 3.0 Apr 24 23:35:58.065118 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 24 23:35:58.065146 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 24 23:35:58.069474 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (468) Apr 24 23:35:58.076484 kernel: BTRFS: device fsid 077bb4ac-fe88-409a-8f61-fdf28cadf681 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (463) Apr 24 23:35:58.076504 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 24 23:35:58.084113 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 24 23:35:58.089657 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 24 23:35:58.110270 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 24 23:35:58.110430 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 24 23:35:58.110571 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 24 23:35:58.110687 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 24 23:35:58.110802 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 24 23:35:58.110919 kernel: hub 1-0:1.0: USB hub found Apr 24 23:35:58.111055 kernel: hub 1-0:1.0: 4 ports detected Apr 24 23:35:58.111181 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 24 23:35:58.111306 kernel: hub 2-0:1.0: USB hub found Apr 24 23:35:58.111429 kernel: scsi host1: ahci Apr 24 23:35:58.111587 kernel: hub 2-0:1.0: 4 ports detected Apr 24 23:35:58.111715 kernel: scsi host2: ahci Apr 24 23:35:58.116604 kernel: scsi host3: ahci Apr 24 23:35:58.116215 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 24 23:35:58.123498 kernel: scsi host4: ahci Apr 24 23:35:58.124614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 24 23:35:58.127511 kernel: scsi host5: ahci Apr 24 23:35:58.127661 kernel: scsi host6: ahci Apr 24 23:35:58.127778 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 50 Apr 24 23:35:58.129615 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 50 Apr 24 23:35:58.131028 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 24 23:35:58.141211 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 50 Apr 24 23:35:58.141228 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 50 Apr 24 23:35:58.141239 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 50 Apr 24 23:35:58.141249 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 50 Apr 24 23:35:58.142020 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 24 23:35:58.149641 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 24 23:35:58.155487 disk-uuid[580]: Primary Header is updated. Apr 24 23:35:58.155487 disk-uuid[580]: Secondary Entries is updated. Apr 24 23:35:58.155487 disk-uuid[580]: Secondary Header is updated. 
Apr 24 23:35:58.160483 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:35:58.166492 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:35:58.332534 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 24 23:35:58.467810 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 24 23:35:58.467914 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 24 23:35:58.467938 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 24 23:35:58.468529 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 24 23:35:58.486311 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 24 23:35:58.486372 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 24 23:35:58.486394 kernel: ata1.00: applying bridge limits Apr 24 23:35:58.490813 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 24 23:35:58.493932 kernel: ata1.00: configured for UDMA/100 Apr 24 23:35:58.499504 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 24 23:35:58.504560 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 24 23:35:58.529360 kernel: usbcore: registered new interface driver usbhid Apr 24 23:35:58.529482 kernel: usbhid: USB HID core driver Apr 24 23:35:58.538474 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 24 23:35:58.542472 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 24 23:35:58.553499 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 24 23:35:58.553910 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 24 23:35:58.568724 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 24 23:35:59.170560 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 24 23:35:59.171314 disk-uuid[581]: The operation has completed successfully. Apr 24 23:35:59.245580 systemd[1]: disk-uuid.service: Deactivated successfully. 
Apr 24 23:35:59.245693 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 24 23:35:59.250591 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 24 23:35:59.255187 sh[599]: Success Apr 24 23:35:59.268479 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 24 23:35:59.313127 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 24 23:35:59.321598 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 24 23:35:59.322732 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 24 23:35:59.338563 kernel: BTRFS info (device dm-0): first mount of filesystem 077bb4ac-fe88-409a-8f61-fdf28cadf681 Apr 24 23:35:59.338589 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:35:59.343056 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 24 23:35:59.343077 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 24 23:35:59.345007 kernel: BTRFS info (device dm-0): using free space tree Apr 24 23:35:59.354490 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 24 23:35:59.356546 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 24 23:35:59.357347 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 24 23:35:59.369646 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 24 23:35:59.373644 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 24 23:35:59.390708 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:35:59.390737 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 24 23:35:59.390747 kernel: BTRFS info (device sda6): using free space tree Apr 24 23:35:59.399684 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 24 23:35:59.399710 kernel: BTRFS info (device sda6): auto enabling async discard Apr 24 23:35:59.410009 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 24 23:35:59.413508 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b Apr 24 23:35:59.418717 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 24 23:35:59.424637 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 24 23:35:59.473102 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 24 23:35:59.481049 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:35:59.486651 ignition[719]: Ignition 2.19.0 Apr 24 23:35:59.486663 ignition[719]: Stage: fetch-offline Apr 24 23:35:59.486701 ignition[719]: no configs at "/usr/lib/ignition/base.d" Apr 24 23:35:59.486710 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 24 23:35:59.486791 ignition[719]: parsed url from cmdline: "" Apr 24 23:35:59.489062 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 24 23:35:59.486795 ignition[719]: no config URL provided
Apr 24 23:35:59.486799 ignition[719]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:35:59.486809 ignition[719]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:35:59.486813 ignition[719]: failed to fetch config: resource requires networking
Apr 24 23:35:59.486951 ignition[719]: Ignition finished successfully
Apr 24 23:35:59.502217 systemd-networkd[783]: lo: Link UP
Apr 24 23:35:59.502225 systemd-networkd[783]: lo: Gained carrier
Apr 24 23:35:59.504489 systemd-networkd[783]: Enumeration completed
Apr 24 23:35:59.504555 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:35:59.505062 systemd[1]: Reached target network.target - Network.
Apr 24 23:35:59.505572 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:35:59.505576 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:35:59.506284 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:35:59.506288 systemd-networkd[783]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:35:59.506787 systemd-networkd[783]: eth0: Link UP
Apr 24 23:35:59.506791 systemd-networkd[783]: eth0: Gained carrier
Apr 24 23:35:59.506799 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:35:59.511341 systemd-networkd[783]: eth1: Link UP
Apr 24 23:35:59.511345 systemd-networkd[783]: eth1: Gained carrier
Apr 24 23:35:59.511351 systemd-networkd[783]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:35:59.522617 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:35:59.532021 ignition[787]: Ignition 2.19.0
Apr 24 23:35:59.532030 ignition[787]: Stage: fetch
Apr 24 23:35:59.532151 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:35:59.532160 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:35:59.532223 ignition[787]: parsed url from cmdline: ""
Apr 24 23:35:59.532227 ignition[787]: no config URL provided
Apr 24 23:35:59.532231 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:35:59.532239 ignition[787]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:35:59.532253 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 24 23:35:59.532370 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:35:59.556511 systemd-networkd[783]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 24 23:35:59.578506 systemd-networkd[783]: eth0: DHCPv4 address 65.108.57.84/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 24 23:35:59.733509 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 24 23:35:59.741156 ignition[787]: GET result: OK
Apr 24 23:35:59.741278 ignition[787]: parsing config with SHA512: 4405c195732250f31f76490dcdf23a56fcc0a57ffaa6483a8e5aef3c0ce75845d9fd5b130eedfd08555038ba4ba7b13a8cde959790775ed78bf94cdf1af3536d
Apr 24 23:35:59.747336 unknown[787]: fetched base config from "system"
Apr 24 23:35:59.747962 ignition[787]: fetch: fetch complete
Apr 24 23:35:59.747361 unknown[787]: fetched base config from "system"
Apr 24 23:35:59.747974 ignition[787]: fetch: fetch passed
Apr 24 23:35:59.747380 unknown[787]: fetched user config from "hetzner"
Apr 24 23:35:59.748063 ignition[787]: Ignition finished successfully
Apr 24 23:35:59.752262 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
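The fetch stage above fails its first GET to the Hetzner metadata service (the DHCPv4 leases only arrive afterwards), succeeds on attempt #2, and then logs the SHA-512 of the userdata it is about to parse. A minimal sketch of that verify-after-fetch step; the `fetch` callable and sample payload are stand-ins for illustration, not Ignition's actual API:

```python
import hashlib


def parse_config(fetch, expected_sha512=None):
    """Fetch raw userdata and, as the 'parsing config with SHA512' log line
    suggests, compute its SHA-512 before parsing; optionally check it
    against a known digest."""
    raw = fetch()
    digest = hashlib.sha512(raw).hexdigest()
    if expected_sha512 is not None and digest != expected_sha512:
        raise ValueError("config digest mismatch")
    return raw, digest


# Stand-in for the metadata GET that succeeded on attempt #2.
userdata = b'{"ignition": {"version": "3.4.0"}}'
raw, digest = parse_config(lambda: userdata)
```

Logging the digest rather than the config body keeps secrets out of the journal while still letting you correlate a boot with the exact provisioning payload it received.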
Apr 24 23:35:59.764727 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:35:59.788818 ignition[794]: Ignition 2.19.0
Apr 24 23:35:59.788835 ignition[794]: Stage: kargs
Apr 24 23:35:59.789076 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:35:59.789094 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:35:59.793527 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:35:59.790276 ignition[794]: kargs: kargs passed
Apr 24 23:35:59.790356 ignition[794]: Ignition finished successfully
Apr 24 23:35:59.800704 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:35:59.837941 ignition[802]: Ignition 2.19.0
Apr 24 23:35:59.837953 ignition[802]: Stage: disks
Apr 24 23:35:59.841854 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:35:59.838126 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:35:59.844322 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:35:59.838137 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:35:59.845217 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:35:59.839384 ignition[802]: disks: disks passed
Apr 24 23:35:59.845887 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:35:59.839431 ignition[802]: Ignition finished successfully
Apr 24 23:35:59.846477 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:35:59.847044 systemd[1]: Reached target basic.target - Basic System.
Apr 24 23:35:59.858588 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:35:59.871668 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 24 23:35:59.873899 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:35:59.877539 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:35:59.950512 kernel: EXT4-fs (sda9): mounted filesystem ae73d4a7-3ef8-4c50-8348-4aeb952085ba r/w with ordered data mode. Quota mode: none.
Apr 24 23:35:59.951014 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:35:59.951856 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:35:59.956517 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:35:59.958534 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:35:59.961720 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 24 23:35:59.962540 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:35:59.963239 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:35:59.965788 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:35:59.972965 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (818)
Apr 24 23:35:59.972990 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:35:59.977170 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:35:59.977191 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:35:59.975605 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:35:59.987823 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:35:59.987852 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:35:59.991446 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:00.017278 coreos-metadata[820]: Apr 24 23:36:00.016 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 24 23:36:00.018006 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:36:00.019421 coreos-metadata[820]: Apr 24 23:36:00.018 INFO Fetch successful
Apr 24 23:36:00.019421 coreos-metadata[820]: Apr 24 23:36:00.018 INFO wrote hostname ci-4081-3-6-n-61b787660f to /sysroot/etc/hostname
Apr 24 23:36:00.022380 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:36:00.024122 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:36:00.028153 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:36:00.031290 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:36:00.103034 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:36:00.107552 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:36:00.110282 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:00.116545 kernel: BTRFS info (device sda6): last unmount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:36:00.134914 ignition[938]: INFO : Ignition 2.19.0
Apr 24 23:36:00.135612 ignition[938]: INFO : Stage: mount
Apr 24 23:36:00.136722 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:00.136722 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:36:00.138664 ignition[938]: INFO : mount: mount passed
Apr 24 23:36:00.138664 ignition[938]: INFO : Ignition finished successfully
Apr 24 23:36:00.139197 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:36:00.144555 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:36:00.145559 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
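flatcar-metadata-hostname.service above fetches the hostname from the metadata endpoint and writes it under the not-yet-pivoted root, so the real system comes up already named. A sketch of that write step against a throwaway directory; the function name is illustrative, not the agent's real code:

```python
import pathlib
import tempfile


def write_hostname(sysroot: str, hostname: str) -> pathlib.Path:
    """Persist a metadata-provided hostname the way the log describes:
    'wrote hostname ... to /sysroot/etc/hostname'."""
    target = pathlib.Path(sysroot) / "etc" / "hostname"
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(hostname + "\n")  # /etc/hostname is newline-terminated
    return target


# Exercise against a temporary directory instead of a real /sysroot.
with tempfile.TemporaryDirectory() as root:
    path = write_hostname(root, "ci-4081-3-6-n-61b787660f")
    content = path.read_text()
```

Writing into the mounted /sysroot (rather than the initrd's own /etc) is what makes the setting survive the switch-root later in the log.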
Apr 24 23:36:00.338236 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:36:00.346740 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:36:00.371516 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (949)
Apr 24 23:36:00.379105 kernel: BTRFS info (device sda6): first mount of filesystem 926930fb-88b5-4cf4-bdd1-3374ab036b7b
Apr 24 23:36:00.379177 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 23:36:00.384702 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:36:00.396970 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:36:00.397035 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:36:00.405598 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:36:00.439496 ignition[965]: INFO : Ignition 2.19.0
Apr 24 23:36:00.439496 ignition[965]: INFO : Stage: files
Apr 24 23:36:00.441637 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:00.441637 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:36:00.441637 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:36:00.443873 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:36:00.443873 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:36:00.447224 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:36:00.448349 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:36:00.448349 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:36:00.448170 unknown[965]: wrote ssh authorized keys file for user: core
Apr 24 23:36:00.452204 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:36:00.452204 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 23:36:00.726995 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:36:00.798815 systemd-networkd[783]: eth0: Gained IPv6LL
Apr 24 23:36:01.118806 systemd-networkd[783]: eth1: Gained IPv6LL
Apr 24 23:36:01.161941 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 23:36:01.161941 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:36:01.165083 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 23:36:01.560931 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:36:01.886377 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 23:36:01.886377 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:36:01.889195 ignition[965]: INFO : files: files passed
Apr 24 23:36:01.889195 ignition[965]: INFO : Ignition finished successfully
Apr 24 23:36:01.892247 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:36:01.901670 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:36:01.912614 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:36:01.919072 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:36:01.919679 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
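The files-stage operations logged above are driven by the Ignition config fetched earlier: a `core` user with SSH keys, files such as the Helm tarball under /opt, the `prepare-helm.service` unit plus an enabled preset, and a drop-in for `coreos-metadata.service`. A hypothetical, heavily trimmed Ignition v3-style config that would trigger roughly these operations is sketched below as a Python dict; the SSH key, unit body, and drop-in contents are placeholders, since the real config is not present in the log:

```python
import json

# Hypothetical minimal config mirroring the files-stage ops in the log.
# Placeholder values are marked; only the paths/URLs come from the log itself.
config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,  # produces the op(f) preset line
                "contents": "[Unit]\nDescription=placeholder unit body\n",
            },
            {
                "name": "coreos-metadata.service",
                "dropins": [
                    {"name": "00-custom-metadata.conf", "contents": "[Service]\n# placeholder\n"}
                ],
            },
        ]
    },
}
serialized = json.dumps(config)
```

Each top-level section maps to a block of the log: `passwd.users` to op(1)/op(2), `storage.files` to op(3)–op(a), and `systemd.units` to op(b)–op(f).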
Apr 24 23:36:01.927236 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:01.928704 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:01.928704 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:36:01.930960 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:01.932196 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:36:01.935603 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:36:01.964144 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:36:01.964245 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:36:01.965529 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 23:36:01.966371 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:36:01.966977 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:36:01.968619 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:36:01.982138 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:01.988599 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:36:01.996308 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:36:01.996872 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:36:01.997307 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:36:01.998092 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:36:01.998184 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:36:01.999217 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:36:01.999952 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:36:02.000666 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:36:02.001350 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:36:02.002032 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:36:02.002740 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:36:02.003443 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:36:02.004151 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:36:02.004857 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:36:02.005579 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:36:02.006290 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:36:02.006364 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:36:02.007388 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:36:02.008157 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:36:02.008811 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:36:02.008893 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:36:02.009515 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:36:02.009588 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:36:02.010601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:36:02.010681 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:36:02.011335 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:36:02.011403 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:36:02.012040 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 24 23:36:02.012107 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:36:02.024588 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:36:02.027613 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:36:02.028185 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:36:02.028299 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:36:02.028947 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:36:02.029043 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:36:02.032608 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:36:02.032977 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:36:02.040507 ignition[1019]: INFO : Ignition 2.19.0
Apr 24 23:36:02.040507 ignition[1019]: INFO : Stage: umount
Apr 24 23:36:02.040507 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:36:02.040507 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:36:02.042020 ignition[1019]: INFO : umount: umount passed
Apr 24 23:36:02.042020 ignition[1019]: INFO : Ignition finished successfully
Apr 24 23:36:02.043752 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:36:02.043855 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:36:02.045439 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:36:02.045537 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:36:02.046754 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:36:02.046796 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:36:02.047155 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:36:02.047188 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:36:02.047853 systemd[1]: Stopped target network.target - Network.
Apr 24 23:36:02.048531 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:36:02.048572 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:36:02.048944 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:36:02.049253 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:36:02.052511 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:36:02.053178 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:36:02.053843 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:36:02.054493 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:36:02.054531 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:36:02.055224 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:36:02.055260 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:36:02.056585 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:36:02.056647 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:36:02.057297 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:36:02.057336 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:36:02.058059 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:36:02.058860 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:36:02.060529 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:36:02.061033 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:36:02.061123 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:36:02.061532 systemd-networkd[783]: eth0: DHCPv6 lease lost
Apr 24 23:36:02.062163 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:36:02.062234 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:36:02.065241 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:36:02.065356 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:36:02.065500 systemd-networkd[783]: eth1: DHCPv6 lease lost
Apr 24 23:36:02.068193 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:36:02.068317 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:36:02.069807 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:36:02.069860 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:36:02.082588 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:36:02.083348 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:36:02.083718 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:36:02.084422 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:36:02.084472 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:36:02.084807 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:36:02.084843 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:36:02.085199 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:36:02.085235 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:36:02.085759 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:36:02.099773 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:36:02.099885 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:36:02.104074 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:36:02.104232 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:36:02.105049 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:36:02.105090 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:36:02.105523 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:36:02.105554 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:36:02.106136 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:36:02.106175 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:36:02.107159 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:36:02.107197 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:36:02.108221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:36:02.108259 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:36:02.117593 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:36:02.117927 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:36:02.117970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:36:02.118389 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:36:02.118429 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:36:02.118833 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:36:02.118881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:36:02.119242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:36:02.119276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:36:02.124806 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:36:02.124928 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:36:02.126101 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:36:02.127275 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:36:02.136206 systemd[1]: Switching root.
Apr 24 23:36:02.164523 systemd-journald[187]: Journal stopped
Apr 24 23:36:03.274393 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:36:03.276510 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:36:03.276533 kernel: SELinux: policy capability open_perms=1
Apr 24 23:36:03.276543 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:36:03.276551 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:36:03.276559 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:36:03.276572 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:36:03.276581 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:36:03.276589 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:36:03.276597 kernel: audit: type=1403 audit(1777073762.355:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:36:03.276610 systemd[1]: Successfully loaded SELinux policy in 45.692ms.
Apr 24 23:36:03.276634 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.611ms.
Apr 24 23:36:03.276644 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:36:03.276670 systemd[1]: Detected virtualization kvm.
Apr 24 23:36:03.276688 systemd[1]: Detected architecture x86-64.
Apr 24 23:36:03.276699 systemd[1]: Detected first boot.
Apr 24 23:36:03.276708 systemd[1]: Hostname set to <ci-4081-3-6-n-61b787660f>.
Apr 24 23:36:03.276717 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:36:03.276726 zram_generator::config[1061]: No configuration found.
Apr 24 23:36:03.276736 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:36:03.276749 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:36:03.276758 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:36:03.276766 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:36:03.276776 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:36:03.276787 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:36:03.276796 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:36:03.276804 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:36:03.276813 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:36:03.276821 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:36:03.276830 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:36:03.276839 systemd[1]: Created slice user.slice - User and Session Slice. Apr 24 23:36:03.276847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 24 23:36:03.276859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 24 23:36:03.276867 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 24 23:36:03.276876 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 24 23:36:03.276885 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 24 23:36:03.276895 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 24 23:36:03.276904 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 24 23:36:03.276913 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 24 23:36:03.276921 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 24 23:36:03.276933 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 24 23:36:03.276941 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 24 23:36:03.276950 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 24 23:36:03.276958 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 24 23:36:03.276967 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 24 23:36:03.276976 systemd[1]: Reached target slices.target - Slice Units. Apr 24 23:36:03.276985 systemd[1]: Reached target swap.target - Swaps. Apr 24 23:36:03.276994 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 24 23:36:03.277005 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Apr 24 23:36:03.277014 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 24 23:36:03.277022 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 24 23:36:03.277031 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 24 23:36:03.277040 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 24 23:36:03.277048 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 24 23:36:03.277057 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 24 23:36:03.277070 systemd[1]: Mounting media.mount - External Media Directory... Apr 24 23:36:03.277079 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:36:03.277091 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 24 23:36:03.277099 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 24 23:36:03.277107 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 24 23:36:03.277116 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 24 23:36:03.277135 systemd[1]: Reached target machines.target - Containers. Apr 24 23:36:03.277143 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 24 23:36:03.277152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:36:03.277161 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:36:03.277172 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 24 23:36:03.277181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 24 23:36:03.277190 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:36:03.277198 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:36:03.277207 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 24 23:36:03.277215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:36:03.277224 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 24 23:36:03.277233 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 24 23:36:03.277244 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 24 23:36:03.277253 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 24 23:36:03.277262 systemd[1]: Stopped systemd-fsck-usr.service. Apr 24 23:36:03.277270 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 24 23:36:03.277281 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:36:03.277291 kernel: fuse: init (API version 7.39) Apr 24 23:36:03.277300 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 23:36:03.277309 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 23:36:03.277320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:36:03.277329 kernel: ACPI: bus type drm_connector registered Apr 24 23:36:03.277337 systemd[1]: verity-setup.service: Deactivated successfully. Apr 24 23:36:03.277346 systemd[1]: Stopped verity-setup.service. Apr 24 23:36:03.277355 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Apr 24 23:36:03.277386 systemd-journald[1144]: Collecting audit messages is disabled. Apr 24 23:36:03.277407 kernel: loop: module loaded Apr 24 23:36:03.277416 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 23:36:03.277428 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 23:36:03.277437 systemd-journald[1144]: Journal started Apr 24 23:36:03.277453 systemd-journald[1144]: Runtime Journal (/run/log/journal/54d06193cd8d4c319d5b0b3312d6194b) is 8.0M, max 76.3M, 68.3M free. Apr 24 23:36:02.936584 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:36:02.957034 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 24 23:36:02.957475 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 24 23:36:03.281471 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:36:03.283607 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 23:36:03.284607 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 24 23:36:03.287317 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 24 23:36:03.289856 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 24 23:36:03.291634 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 24 23:36:03.294142 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:36:03.295744 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 24 23:36:03.295922 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 23:36:03.296641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:36:03.296781 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:36:03.297427 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:36:03.297652 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 24 23:36:03.298415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:36:03.298704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:36:03.299365 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 23:36:03.299643 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 23:36:03.300286 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:36:03.300492 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:36:03.301103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:36:03.301777 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 23:36:03.302404 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 23:36:03.312334 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 23:36:03.319527 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 23:36:03.323610 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 23:36:03.325545 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 24 23:36:03.325570 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:36:03.327344 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 24 23:36:03.333834 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 23:36:03.337731 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 23:36:03.338185 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 24 23:36:03.343574 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 23:36:03.347798 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 23:36:03.348181 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:36:03.356590 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 23:36:03.357086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:36:03.362615 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:36:03.375227 systemd-journald[1144]: Time spent on flushing to /var/log/journal/54d06193cd8d4c319d5b0b3312d6194b is 81.255ms for 1172 entries. Apr 24 23:36:03.375227 systemd-journald[1144]: System Journal (/var/log/journal/54d06193cd8d4c319d5b0b3312d6194b) is 8.0M, max 584.8M, 576.8M free. Apr 24 23:36:03.492664 systemd-journald[1144]: Received client request to flush runtime journal. Apr 24 23:36:03.492698 kernel: loop0: detected capacity change from 0 to 142488 Apr 24 23:36:03.492719 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 23:36:03.492735 kernel: loop1: detected capacity change from 0 to 228704 Apr 24 23:36:03.364386 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 24 23:36:03.368579 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:36:03.370432 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 23:36:03.370880 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 23:36:03.371440 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Apr 24 23:36:03.397830 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 23:36:03.398288 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 23:36:03.402629 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 24 23:36:03.448598 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:36:03.459603 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 24 23:36:03.460211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:36:03.479449 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Apr 24 23:36:03.479472 systemd-tmpfiles[1182]: ACLs are not supported, ignoring. Apr 24 23:36:03.487204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 24 23:36:03.495744 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 23:36:03.497323 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 23:36:03.504324 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 24 23:36:03.514638 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 23:36:03.517989 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 24 23:36:03.535493 kernel: loop2: detected capacity change from 0 to 8 Apr 24 23:36:03.556582 kernel: loop3: detected capacity change from 0 to 140768 Apr 24 23:36:03.568317 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 23:36:03.575437 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Apr 24 23:36:03.603476 kernel: loop4: detected capacity change from 0 to 142488 Apr 24 23:36:03.603573 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 24 23:36:03.603834 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Apr 24 23:36:03.614617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:36:03.633614 kernel: loop5: detected capacity change from 0 to 228704 Apr 24 23:36:03.653511 kernel: loop6: detected capacity change from 0 to 8 Apr 24 23:36:03.657499 kernel: loop7: detected capacity change from 0 to 140768 Apr 24 23:36:03.674781 (sd-merge)[1209]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 24 23:36:03.676627 (sd-merge)[1209]: Merged extensions into '/usr'. Apr 24 23:36:03.683791 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 23:36:03.683888 systemd[1]: Reloading... Apr 24 23:36:03.771481 zram_generator::config[1233]: No configuration found. Apr 24 23:36:03.882813 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 23:36:03.893856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:36:03.930176 systemd[1]: Reloading finished in 245 ms. Apr 24 23:36:03.962046 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 23:36:03.963004 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 23:36:03.963641 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 23:36:03.973589 systemd[1]: Starting ensure-sysext.service... Apr 24 23:36:03.976625 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Apr 24 23:36:03.979905 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:36:03.983574 systemd[1]: Reloading requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)... Apr 24 23:36:03.983585 systemd[1]: Reloading... Apr 24 23:36:04.006802 systemd-udevd[1282]: Using default interface naming scheme 'v255'. Apr 24 23:36:04.006802 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 23:36:04.007074 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 23:36:04.010939 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 23:36:04.011156 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Apr 24 23:36:04.011217 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Apr 24 23:36:04.016169 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:36:04.016181 systemd-tmpfiles[1281]: Skipping /boot Apr 24 23:36:04.033355 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:36:04.033443 systemd-tmpfiles[1281]: Skipping /boot Apr 24 23:36:04.079492 zram_generator::config[1313]: No configuration found. Apr 24 23:36:04.206488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 24 23:36:04.215481 kernel: ACPI: button: Power Button [PWRF] Apr 24 23:36:04.240443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 24 23:36:04.264501 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1320) Apr 24 23:36:04.285481 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:36:04.290761 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 24 23:36:04.290828 systemd[1]: Reloading finished in 306 ms. Apr 24 23:36:04.306809 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:36:04.307873 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:36:04.311475 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Apr 24 23:36:04.313494 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 23:36:04.317412 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 24 23:36:04.317619 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 23:36:04.333585 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 24 23:36:04.334220 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 24 23:36:04.336712 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:36:04.345585 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:36:04.348972 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:36:04.349604 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:36:04.353716 kernel: EDAC MC: Ver: 3.0.0 Apr 24 23:36:04.356870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 24 23:36:04.363946 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:36:04.364754 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Apr 24 23:36:04.364779 kernel: Console: switching to colour dummy device 80x25 Apr 24 23:36:04.366493 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Apr 24 23:36:04.371547 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 24 23:36:04.371585 kernel: [drm] features: -context_init Apr 24 23:36:04.371597 kernel: [drm] number of scanouts: 1 Apr 24 23:36:04.372553 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:36:04.372726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:36:04.376124 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 23:36:04.379715 kernel: [drm] number of cap sets: 0 Apr 24 23:36:04.379739 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 24 23:36:04.380616 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:36:04.387418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:36:04.388545 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Apr 24 23:36:04.388568 kernel: Console: switching to colour frame buffer device 160x50 Apr 24 23:36:04.399245 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 24 23:36:04.412686 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 23:36:04.414265 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:36:04.416065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Apr 24 23:36:04.416508 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:36:04.416927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:36:04.417643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:36:04.438328 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:36:04.439144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:36:04.473939 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:36:04.479887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 24 23:36:04.481525 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:36:04.481747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:36:04.489666 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:36:04.492247 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:36:04.495862 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:36:04.498999 augenrules[1424]: No rules Apr 24 23:36:04.499140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:36:04.500032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:36:04.501567 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:36:04.506736 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:36:04.508651 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 24 23:36:04.508720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 23:36:04.511607 systemd[1]: Finished ensure-sysext.service. Apr 24 23:36:04.513061 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:36:04.513595 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 23:36:04.514024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:36:04.514159 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:36:04.516598 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:36:04.516727 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:36:04.517233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:36:04.517350 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:36:04.518431 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:36:04.519305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:36:04.525999 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:36:04.540788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:36:04.540902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:36:04.546558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 23:36:04.558526 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:36:04.559470 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 24 23:36:04.567053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:36:04.567250 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:36:04.579708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:36:04.585561 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:36:04.586260 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:36:04.591921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:36:04.600223 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 24 23:36:04.611661 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:36:04.632896 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:36:04.637020 systemd-networkd[1399]: lo: Link UP Apr 24 23:36:04.637030 systemd-networkd[1399]: lo: Gained carrier Apr 24 23:36:04.644514 systemd-networkd[1399]: Enumeration completed Apr 24 23:36:04.644603 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:36:04.652539 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:36:04.652546 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:36:04.653228 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:36:04.653232 systemd-networkd[1399]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 24 23:36:04.653767 systemd-networkd[1399]: eth0: Link UP Apr 24 23:36:04.653771 systemd-networkd[1399]: eth0: Gained carrier Apr 24 23:36:04.653781 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:36:04.656653 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:36:04.659668 systemd-networkd[1399]: eth1: Link UP Apr 24 23:36:04.659672 systemd-networkd[1399]: eth1: Gained carrier Apr 24 23:36:04.659683 systemd-networkd[1399]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:36:04.663165 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:36:04.663658 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:36:04.672673 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 24 23:36:04.692511 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:36:04.694174 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 23:36:04.695192 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:36:04.699574 systemd-networkd[1399]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 24 23:36:04.700606 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection. Apr 24 23:36:04.707658 systemd-resolved[1400]: Positive Trust Anchors: Apr 24 23:36:04.709201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:36:04.709218 systemd-resolved[1400]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:36:04.709242 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:36:04.716079 systemd-resolved[1400]: Using system hostname 'ci-4081-3-6-n-61b787660f'. Apr 24 23:36:04.718127 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:36:04.718512 systemd-networkd[1399]: eth0: DHCPv4 address 65.108.57.84/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 24 23:36:04.718654 systemd[1]: Reached target network.target - Network. Apr 24 23:36:04.719007 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:36:04.719372 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:36:04.719819 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:36:04.720231 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:36:04.722254 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection. Apr 24 23:36:04.723626 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:36:04.724089 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:36:04.724439 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 24 23:36:04.724788 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:36:04.724810 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:36:04.725175 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:36:04.730536 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:36:04.732139 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:36:04.738007 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:36:04.739803 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:36:04.740289 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:36:04.743040 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 23:36:04.743397 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:36:04.743778 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:36:04.743806 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:36:04.755535 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:36:04.758184 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 23:36:04.762617 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:36:04.770620 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:36:04.777997 coreos-metadata[1474]: Apr 24 23:36:04.777 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 24 23:36:04.779407 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Apr 24 23:36:04.781508 coreos-metadata[1474]: Apr 24 23:36:04.779 INFO Fetch successful
Apr 24 23:36:04.781508 coreos-metadata[1474]: Apr 24 23:36:04.779 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 24 23:36:04.781508 coreos-metadata[1474]: Apr 24 23:36:04.780 INFO Fetch successful
Apr 24 23:36:04.780568 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 23:36:04.783575 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 23:36:04.785736 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 23:36:04.787361 jq[1478]: false
Apr 24 23:36:04.794659 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 24 23:36:04.799194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 23:36:04.811579 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 23:36:04.820694 extend-filesystems[1479]: Found loop4
Apr 24 23:36:04.823126 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 23:36:04.827109 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 23:36:04.827951 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found loop5
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found loop6
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found loop7
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda1
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda2
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda3
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found usr
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda4
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda6
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda7
Apr 24 23:36:04.829676 extend-filesystems[1479]: Found sda9
Apr 24 23:36:04.829676 extend-filesystems[1479]: Checking size of /dev/sda9
Apr 24 23:36:04.873204 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 24 23:36:04.836651 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 23:36:04.843129 dbus-daemon[1475]: [system] SELinux support is enabled
Apr 24 23:36:04.873535 extend-filesystems[1479]: Resized partition /dev/sda9
Apr 24 23:36:04.851664 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 23:36:04.873979 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024)
Apr 24 23:36:04.867735 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 23:36:04.878089 jq[1499]: true
Apr 24 23:36:04.882480 update_engine[1494]: I20260424 23:36:04.882180 1494 main.cc:92] Flatcar Update Engine starting
Apr 24 23:36:04.883261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 23:36:04.883447 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 23:36:04.883771 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 23:36:04.883929 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 23:36:04.893156 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 23:36:04.893320 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 23:36:04.901832 update_engine[1494]: I20260424 23:36:04.901641 1494 update_check_scheduler.cc:74] Next update check in 4m51s
Apr 24 23:36:04.908989 jq[1509]: true
Apr 24 23:36:04.918561 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1314)
Apr 24 23:36:04.946648 (ntainerd)[1515]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 23:36:04.949894 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 23:36:04.955557 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 23:36:04.955588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 23:36:04.955921 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 23:36:04.955936 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 23:36:04.963687 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 23:36:04.990132 tar[1507]: linux-amd64/LICENSE
Apr 24 23:36:05.003121 tar[1507]: linux-amd64/helm
Apr 24 23:36:05.013521 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 24 23:36:05.018354 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 24 23:36:05.040139 systemd-logind[1493]: New seat seat0.
Apr 24 23:36:05.049030 systemd-logind[1493]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 24 23:36:05.049049 systemd-logind[1493]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 23:36:05.049256 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 23:36:05.087566 bash[1544]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:36:05.089393 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 23:36:05.100676 systemd[1]: Starting sshkeys.service...
Apr 24 23:36:05.111842 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 24 23:36:05.120935 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 24 23:36:05.180412 containerd[1515]: time="2026-04-24T23:36:05.180349881Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 24 23:36:05.182171 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 24 23:36:05.182936 coreos-metadata[1551]: Apr 24 23:36:05.181 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 24 23:36:05.185024 coreos-metadata[1551]: Apr 24 23:36:05.184 INFO Fetch successful
Apr 24 23:36:05.190370 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 23:36:05.195519 unknown[1551]: wrote ssh authorized keys file for user: core
Apr 24 23:36:05.206136 containerd[1515]: time="2026-04-24T23:36:05.206087328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.207873017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.207905245Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.207924284Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208392466Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208406277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208478566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208488320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208657013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208667309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208676563Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:05.208906 containerd[1515]: time="2026-04-24T23:36:05.208684014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:05.210748 containerd[1515]: time="2026-04-24T23:36:05.208747329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:05.210748 containerd[1515]: time="2026-04-24T23:36:05.208982171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 24 23:36:05.210748 containerd[1515]: time="2026-04-24T23:36:05.209159988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 24 23:36:05.210748 containerd[1515]: time="2026-04-24T23:36:05.209172246Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 24 23:36:05.210748 containerd[1515]: time="2026-04-24T23:36:05.209248371Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 24 23:36:05.210748 containerd[1515]: time="2026-04-24T23:36:05.209286818Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 23:36:05.221284 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 24 23:36:05.226362 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 24 23:36:05.236606 containerd[1515]: time="2026-04-24T23:36:05.236579188Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 24 23:36:05.236857 containerd[1515]: time="2026-04-24T23:36:05.236844496Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 24 23:36:05.237036 containerd[1515]: time="2026-04-24T23:36:05.237014722Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 24 23:36:05.237124 containerd[1515]: time="2026-04-24T23:36:05.237104076Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 24 23:36:05.237163 containerd[1515]: time="2026-04-24T23:36:05.237154832Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 24 23:36:05.237384 containerd[1515]: time="2026-04-24T23:36:05.237370596Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 24 23:36:05.239778 containerd[1515]: time="2026-04-24T23:36:05.239683366Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 24 23:36:05.243753 containerd[1515]: time="2026-04-24T23:36:05.243713933Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 24 23:36:05.243929 containerd[1515]: time="2026-04-24T23:36:05.243916517Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 24 23:36:05.243988 containerd[1515]: time="2026-04-24T23:36:05.243959772Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 24 23:36:05.244015 containerd[1515]: time="2026-04-24T23:36:05.243974464Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244088 containerd[1515]: time="2026-04-24T23:36:05.244042016Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244088 containerd[1515]: time="2026-04-24T23:36:05.244053863Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244088 containerd[1515]: time="2026-04-24T23:36:05.244064479Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244088 containerd[1515]: time="2026-04-24T23:36:05.244074725Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244333 containerd[1515]: time="2026-04-24T23:36:05.244215977Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244333 containerd[1515]: time="2026-04-24T23:36:05.244230709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244333 containerd[1515]: time="2026-04-24T23:36:05.244240023Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 24 23:36:05.244333 containerd[1515]: time="2026-04-24T23:36:05.244255596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.244333 containerd[1515]: time="2026-04-24T23:36:05.244266713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.244333 containerd[1515]: time="2026-04-24T23:36:05.244292341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.244333 containerd[1515]: time="2026-04-24T23:36:05.244302637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244442507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244469157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244482306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244493122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244501876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244512552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244521064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244536788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244546242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244557379Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244573513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244581665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.244589107Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 24 23:36:05.251551 containerd[1515]: time="2026-04-24T23:36:05.245211971Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 24 23:36:05.247473 systemd[1]: issuegen.service: Deactivated successfully.
Apr 24 23:36:05.251799 containerd[1515]: time="2026-04-24T23:36:05.245232832Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 24 23:36:05.251799 containerd[1515]: time="2026-04-24T23:36:05.245241385Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 24 23:36:05.251799 containerd[1515]: time="2026-04-24T23:36:05.245250829Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 24 23:36:05.251799 containerd[1515]: time="2026-04-24T23:36:05.245257940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.251799 containerd[1515]: time="2026-04-24T23:36:05.245266713Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 24 23:36:05.251799 containerd[1515]: time="2026-04-24T23:36:05.245279562Z" level=info msg="NRI interface is disabled by configuration."
Apr 24 23:36:05.251799 containerd[1515]: time="2026-04-24T23:36:05.245287725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 24 23:36:05.247644 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 24 23:36:05.251986 containerd[1515]: time="2026-04-24T23:36:05.245499392Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 24 23:36:05.251986 containerd[1515]: time="2026-04-24T23:36:05.245768426Z" level=info msg="Connect containerd service"
Apr 24 23:36:05.251986 containerd[1515]: time="2026-04-24T23:36:05.245797860Z" level=info msg="using legacy CRI server"
Apr 24 23:36:05.251986 containerd[1515]: time="2026-04-24T23:36:05.245803378Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 23:36:05.251986 containerd[1515]: time="2026-04-24T23:36:05.245892602Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 24 23:36:05.251986 containerd[1515]: time="2026-04-24T23:36:05.251685644Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 23:36:05.251986 containerd[1515]: time="2026-04-24T23:36:05.251958574Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 24 23:36:05.252178 containerd[1515]: time="2026-04-24T23:36:05.252006996Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 24 23:36:05.252178 containerd[1515]: time="2026-04-24T23:36:05.252046055Z" level=info msg="Start subscribing containerd event"
Apr 24 23:36:05.252178 containerd[1515]: time="2026-04-24T23:36:05.252082069Z" level=info msg="Start recovering state"
Apr 24 23:36:05.252178 containerd[1515]: time="2026-04-24T23:36:05.252147878Z" level=info msg="Start event monitor"
Apr 24 23:36:05.252178 containerd[1515]: time="2026-04-24T23:36:05.252165004Z" level=info msg="Start snapshots syncer"
Apr 24 23:36:05.252233 containerd[1515]: time="2026-04-24T23:36:05.252180437Z" level=info msg="Start cni network conf syncer for default"
Apr 24 23:36:05.252233 containerd[1515]: time="2026-04-24T23:36:05.252186626Z" level=info msg="Start streaming server"
Apr 24 23:36:05.252261 containerd[1515]: time="2026-04-24T23:36:05.252238434Z" level=info msg="containerd successfully booted in 0.074147s"
Apr 24 23:36:05.255690 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 23:36:05.256221 systemd[1]: Started containerd.service - containerd container runtime.
Apr 24 23:36:05.272895 update-ssh-keys[1568]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 23:36:05.273194 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 23:36:05.274704 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 24 23:36:05.279166 systemd[1]: Finished sshkeys.service.
Apr 24 23:36:05.289694 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 23:36:05.298182 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 24 23:36:05.298712 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 23:36:05.299167 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 23:36:05.330477 extend-filesystems[1503]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 24 23:36:05.330477 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 24 23:36:05.330477 extend-filesystems[1503]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 24 23:36:05.336356 extend-filesystems[1479]: Resized filesystem in /dev/sda9
Apr 24 23:36:05.336356 extend-filesystems[1479]: Found sr0
Apr 24 23:36:05.332170 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 24 23:36:05.332363 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 24 23:36:05.564174 tar[1507]: linux-amd64/README.md
Apr 24 23:36:05.573767 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 23:36:06.558836 systemd-networkd[1399]: eth0: Gained IPv6LL
Apr 24 23:36:06.559989 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection.
Apr 24 23:36:06.564894 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 23:36:06.567388 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 23:36:06.580967 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:06.587610 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 23:36:06.623580 systemd-networkd[1399]: eth1: Gained IPv6LL
Apr 24 23:36:06.625018 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection.
Apr 24 23:36:06.636036 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 23:36:07.359904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:07.363807 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 23:36:07.364074 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:36:07.365599 systemd[1]: Startup finished in 1.591s (kernel) + 5.555s (initrd) + 5.054s (userspace) = 12.202s.
Apr 24 23:36:07.859069 kubelet[1604]: E0424 23:36:07.859008 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:36:07.865623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:36:07.865930 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:36:10.737601 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 23:36:10.743945 systemd[1]: Started sshd@0-65.108.57.84:22-4.175.71.9:50756.service - OpenSSH per-connection server daemon (4.175.71.9:50756).
Apr 24 23:36:10.977044 sshd[1616]: Accepted publickey for core from 4.175.71.9 port 50756 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM
Apr 24 23:36:10.980794 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:10.995428 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 23:36:11.005361 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 23:36:11.007393 systemd-logind[1493]: New session 1 of user core.
Apr 24 23:36:11.021948 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 23:36:11.027805 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 23:36:11.031155 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 23:36:11.137252 systemd[1620]: Queued start job for default target default.target.
Apr 24 23:36:11.144504 systemd[1620]: Created slice app.slice - User Application Slice.
Apr 24 23:36:11.144528 systemd[1620]: Reached target paths.target - Paths.
Apr 24 23:36:11.144540 systemd[1620]: Reached target timers.target - Timers.
Apr 24 23:36:11.145849 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 23:36:11.165848 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 23:36:11.166014 systemd[1620]: Reached target sockets.target - Sockets.
Apr 24 23:36:11.166028 systemd[1620]: Reached target basic.target - Basic System.
Apr 24 23:36:11.166061 systemd[1620]: Reached target default.target - Main User Target.
Apr 24 23:36:11.166102 systemd[1620]: Startup finished in 123ms.
Apr 24 23:36:11.166322 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 23:36:11.173566 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 23:36:11.357753 systemd[1]: Started sshd@1-65.108.57.84:22-4.175.71.9:50768.service - OpenSSH per-connection server daemon (4.175.71.9:50768).
Apr 24 23:36:11.578384 sshd[1631]: Accepted publickey for core from 4.175.71.9 port 50768 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM
Apr 24 23:36:11.582121 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:11.590710 systemd-logind[1493]: New session 2 of user core.
Apr 24 23:36:11.598813 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 23:36:11.751758 sshd[1631]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:11.757366 systemd[1]: sshd@1-65.108.57.84:22-4.175.71.9:50768.service: Deactivated successfully.
Apr 24 23:36:11.761266 systemd[1]: session-2.scope: Deactivated successfully.
Apr 24 23:36:11.764241 systemd-logind[1493]: Session 2 logged out. Waiting for processes to exit.
Apr 24 23:36:11.766051 systemd-logind[1493]: Removed session 2.
Apr 24 23:36:11.799902 systemd[1]: Started sshd@2-65.108.57.84:22-4.175.71.9:50776.service - OpenSSH per-connection server daemon (4.175.71.9:50776).
Apr 24 23:36:12.022761 sshd[1638]: Accepted publickey for core from 4.175.71.9 port 50776 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM
Apr 24 23:36:12.026319 sshd[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:12.034993 systemd-logind[1493]: New session 3 of user core.
Apr 24 23:36:12.040734 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 24 23:36:12.188202 sshd[1638]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:12.193301 systemd[1]: sshd@2-65.108.57.84:22-4.175.71.9:50776.service: Deactivated successfully.
Apr 24 23:36:12.196162 systemd[1]: session-3.scope: Deactivated successfully.
Apr 24 23:36:12.198654 systemd-logind[1493]: Session 3 logged out. Waiting for processes to exit.
Apr 24 23:36:12.200538 systemd-logind[1493]: Removed session 3.
Apr 24 23:36:12.241946 systemd[1]: Started sshd@3-65.108.57.84:22-4.175.71.9:50778.service - OpenSSH per-connection server daemon (4.175.71.9:50778).
Apr 24 23:36:12.471175 sshd[1645]: Accepted publickey for core from 4.175.71.9 port 50778 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM
Apr 24 23:36:12.473804 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:12.480901 systemd-logind[1493]: New session 4 of user core.
Apr 24 23:36:12.487689 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 24 23:36:12.646823 sshd[1645]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:12.652736 systemd[1]: sshd@3-65.108.57.84:22-4.175.71.9:50778.service: Deactivated successfully.
Apr 24 23:36:12.656265 systemd[1]: session-4.scope: Deactivated successfully.
Apr 24 23:36:12.658807 systemd-logind[1493]: Session 4 logged out. Waiting for processes to exit.
Apr 24 23:36:12.660874 systemd-logind[1493]: Removed session 4.
Apr 24 23:36:12.693863 systemd[1]: Started sshd@4-65.108.57.84:22-4.175.71.9:50792.service - OpenSSH per-connection server daemon (4.175.71.9:50792).
Apr 24 23:36:12.911505 sshd[1652]: Accepted publickey for core from 4.175.71.9 port 50792 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM
Apr 24 23:36:12.914103 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:12.922443 systemd-logind[1493]: New session 5 of user core.
Apr 24 23:36:12.932707 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 24 23:36:13.069213 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 23:36:13.069997 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:13.094967 sudo[1655]: pam_unix(sudo:session): session closed for user root
Apr 24 23:36:13.127832 sshd[1652]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:13.133018 systemd[1]: sshd@4-65.108.57.84:22-4.175.71.9:50792.service: Deactivated successfully.
Apr 24 23:36:13.136414 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 23:36:13.138979 systemd-logind[1493]: Session 5 logged out. Waiting for processes to exit.
Apr 24 23:36:13.141281 systemd-logind[1493]: Removed session 5.
Apr 24 23:36:13.176833 systemd[1]: Started sshd@5-65.108.57.84:22-4.175.71.9:50798.service - OpenSSH per-connection server daemon (4.175.71.9:50798).
Apr 24 23:36:13.407332 sshd[1660]: Accepted publickey for core from 4.175.71.9 port 50798 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM
Apr 24 23:36:13.408626 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:13.416625 systemd-logind[1493]: New session 6 of user core.
Apr 24 23:36:13.423679 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 23:36:13.549942 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 23:36:13.550662 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:13.558192 sudo[1664]: pam_unix(sudo:session): session closed for user root
Apr 24 23:36:13.571341 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 24 23:36:13.572246 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:13.593812 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 24 23:36:13.614049 auditctl[1667]: No rules
Apr 24 23:36:13.615050 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 23:36:13.615449 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 24 23:36:13.628108 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:36:13.679430 augenrules[1685]: No rules
Apr 24 23:36:13.681159 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:36:13.683024 sudo[1663]: pam_unix(sudo:session): session closed for user root
Apr 24 23:36:13.715623 sshd[1660]: pam_unix(sshd:session): session closed for user core
Apr 24 23:36:13.721431 systemd[1]: sshd@5-65.108.57.84:22-4.175.71.9:50798.service: Deactivated successfully.
Apr 24 23:36:13.725428 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 23:36:13.728219 systemd-logind[1493]: Session 6 logged out. Waiting for processes to exit.
Apr 24 23:36:13.730255 systemd-logind[1493]: Removed session 6.
Apr 24 23:36:13.770809 systemd[1]: Started sshd@6-65.108.57.84:22-4.175.71.9:50804.service - OpenSSH per-connection server daemon (4.175.71.9:50804).
Apr 24 23:36:13.998219 sshd[1693]: Accepted publickey for core from 4.175.71.9 port 50804 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM
Apr 24 23:36:14.000977 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:36:14.009016 systemd-logind[1493]: New session 7 of user core.
Apr 24 23:36:14.016709 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 23:36:14.142313 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 23:36:14.143093 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:36:14.489732 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 23:36:14.509219 (dockerd)[1712]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 23:36:14.845551 dockerd[1712]: time="2026-04-24T23:36:14.845423037Z" level=info msg="Starting up"
Apr 24 23:36:14.941949 dockerd[1712]: time="2026-04-24T23:36:14.941911069Z" level=info msg="Loading containers: start."
Apr 24 23:36:15.033498 kernel: Initializing XFRM netlink socket
Apr 24 23:36:15.055442 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection.
Apr 24 23:36:15.906533 systemd-timesyncd[1444]: Contacted time server 136.243.177.133:123 (2.flatcar.pool.ntp.org).
Apr 24 23:36:15.906599 systemd-timesyncd[1444]: Initial clock synchronization to Fri 2026-04-24 23:36:15.906116 UTC.
Apr 24 23:36:15.906838 systemd-resolved[1400]: Clock change detected. Flushing caches.
Apr 24 23:36:15.918261 systemd-networkd[1399]: docker0: Link UP
Apr 24 23:36:15.929018 dockerd[1712]: time="2026-04-24T23:36:15.928988221Z" level=info msg="Loading containers: done."
Apr 24 23:36:15.947756 dockerd[1712]: time="2026-04-24T23:36:15.947687678Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 24 23:36:15.947977 dockerd[1712]: time="2026-04-24T23:36:15.947782711Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 24 23:36:15.947977 dockerd[1712]: time="2026-04-24T23:36:15.947891764Z" level=info msg="Daemon has completed initialization"
Apr 24 23:36:15.973857 dockerd[1712]: time="2026-04-24T23:36:15.973796262Z" level=info msg="API listen on /run/docker.sock"
Apr 24 23:36:15.974083 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 24 23:36:16.472198 containerd[1515]: time="2026-04-24T23:36:16.472107116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 24 23:36:17.136107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273604723.mount: Deactivated successfully.
Apr 24 23:36:18.377934 containerd[1515]: time="2026-04-24T23:36:18.377878857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:18.379044 containerd[1515]: time="2026-04-24T23:36:18.378929403Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30194089"
Apr 24 23:36:18.380165 containerd[1515]: time="2026-04-24T23:36:18.379918087Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:18.382046 containerd[1515]: time="2026-04-24T23:36:18.382015073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:18.383104 containerd[1515]: time="2026-04-24T23:36:18.382671949Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.910518935s"
Apr 24 23:36:18.383104 containerd[1515]: time="2026-04-24T23:36:18.382696465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 24 23:36:18.383268 containerd[1515]: time="2026-04-24T23:36:18.383246451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 24 23:36:18.915070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:36:18.922458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:19.079705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:19.086114 (kubelet)[1918]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:36:19.114430 kubelet[1918]: E0424 23:36:19.114334 1918 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:36:19.118835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:36:19.119026 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:36:19.736357 containerd[1515]: time="2026-04-24T23:36:19.736308758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:19.737500 containerd[1515]: time="2026-04-24T23:36:19.737287917Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171469"
Apr 24 23:36:19.738354 containerd[1515]: time="2026-04-24T23:36:19.738316700Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:19.740485 containerd[1515]: time="2026-04-24T23:36:19.740455630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:19.741203 containerd[1515]: time="2026-04-24T23:36:19.741069150Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.357801227s"
Apr 24 23:36:19.741203 containerd[1515]: time="2026-04-24T23:36:19.741093617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 24 23:36:19.741932 containerd[1515]: time="2026-04-24T23:36:19.741912796Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 24 23:36:20.847552 containerd[1515]: time="2026-04-24T23:36:20.847502571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:20.848637 containerd[1515]: time="2026-04-24T23:36:20.848605226Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289778"
Apr 24 23:36:20.849535 containerd[1515]: time="2026-04-24T23:36:20.849506388Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:20.852363 containerd[1515]: time="2026-04-24T23:36:20.851607801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:20.852363 containerd[1515]: time="2026-04-24T23:36:20.852245097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.110312821s"
Apr 24 23:36:20.852363 containerd[1515]: time="2026-04-24T23:36:20.852265568Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 24 23:36:20.852727 containerd[1515]: time="2026-04-24T23:36:20.852709404Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 24 23:36:21.926034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2434948890.mount: Deactivated successfully.
Apr 24 23:36:22.251714 containerd[1515]: time="2026-04-24T23:36:22.251574965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:22.253105 containerd[1515]: time="2026-04-24T23:36:22.252979983Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010739"
Apr 24 23:36:22.254783 containerd[1515]: time="2026-04-24T23:36:22.254049207Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:22.255831 containerd[1515]: time="2026-04-24T23:36:22.255744531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:22.256250 containerd[1515]: time="2026-04-24T23:36:22.256215137Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.403445913s"
Apr 24 23:36:22.256306 containerd[1515]: time="2026-04-24T23:36:22.256294977Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 24 23:36:22.256982 containerd[1515]: time="2026-04-24T23:36:22.256959494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 24 23:36:22.814598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1935180329.mount: Deactivated successfully.
Apr 24 23:36:23.929085 containerd[1515]: time="2026-04-24T23:36:23.929025379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:23.930263 containerd[1515]: time="2026-04-24T23:36:23.930138019Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Apr 24 23:36:23.931308 containerd[1515]: time="2026-04-24T23:36:23.931113182Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:23.933152 containerd[1515]: time="2026-04-24T23:36:23.933121695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:23.933831 containerd[1515]: time="2026-04-24T23:36:23.933795256Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.676812427s"
Apr 24 23:36:23.933969 containerd[1515]: time="2026-04-24T23:36:23.933881726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 24 23:36:23.934556 containerd[1515]: time="2026-04-24T23:36:23.934533844Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 24 23:36:24.424620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938568779.mount: Deactivated successfully.
Apr 24 23:36:24.442316 containerd[1515]: time="2026-04-24T23:36:24.440997521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:24.442871 containerd[1515]: time="2026-04-24T23:36:24.442798333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Apr 24 23:36:24.444754 containerd[1515]: time="2026-04-24T23:36:24.444699565Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:24.447870 containerd[1515]: time="2026-04-24T23:36:24.447706837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:24.449682 containerd[1515]: time="2026-04-24T23:36:24.448764775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 514.200555ms"
Apr 24 23:36:24.449682 containerd[1515]: time="2026-04-24T23:36:24.448879487Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 24 23:36:24.450173 containerd[1515]: time="2026-04-24T23:36:24.450107980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 24 23:36:25.033253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636039068.mount: Deactivated successfully.
Apr 24 23:36:26.015224 containerd[1515]: time="2026-04-24T23:36:26.015175543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:26.016292 containerd[1515]: time="2026-04-24T23:36:26.016096495Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719532"
Apr 24 23:36:26.018570 containerd[1515]: time="2026-04-24T23:36:26.017334723Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:26.020815 containerd[1515]: time="2026-04-24T23:36:26.019477298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:36:26.021513 containerd[1515]: time="2026-04-24T23:36:26.021490599Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.571330891s"
Apr 24 23:36:26.021547 containerd[1515]: time="2026-04-24T23:36:26.021517589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 24 23:36:27.444866 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:27.452991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:27.479623 systemd[1]: Reloading requested from client PID 2088 ('systemctl') (unit session-7.scope)...
Apr 24 23:36:27.479644 systemd[1]: Reloading...
Apr 24 23:36:27.581869 zram_generator::config[2128]: No configuration found.
Apr 24 23:36:27.671854 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:36:27.731862 systemd[1]: Reloading finished in 251 ms.
Apr 24 23:36:27.780107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:27.783853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:27.791221 systemd[1]: kubelet.service: Deactivated successfully.
Apr 24 23:36:27.791421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:27.795168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:27.951929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:27.960413 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:36:27.992761 kubelet[2184]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:36:27.992761 kubelet[2184]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:36:27.992761 kubelet[2184]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:36:27.992761 kubelet[2184]: I0424 23:36:27.992696 2184 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:36:28.104857 kubelet[2184]: I0424 23:36:28.103339 2184 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:36:28.104857 kubelet[2184]: I0424 23:36:28.103371 2184 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:36:28.104857 kubelet[2184]: I0424 23:36:28.103655 2184 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:36:28.126684 kubelet[2184]: E0424 23:36:28.126470 2184 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://65.108.57.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 23:36:28.130459 kubelet[2184]: I0424 23:36:28.130181 2184 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:36:28.136184 kubelet[2184]: E0424 23:36:28.136152 2184 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:36:28.136184 kubelet[2184]: I0424 23:36:28.136179 2184 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:36:28.139070 kubelet[2184]: I0424 23:36:28.139046 2184 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:36:28.139840 kubelet[2184]: I0424 23:36:28.139797 2184 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:36:28.139950 kubelet[2184]: I0424 23:36:28.139832 2184 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-61b787660f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 23:36:28.139950 kubelet[2184]: I0424 23:36:28.139948 2184 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:36:28.140048 kubelet[2184]: I0424 23:36:28.139962 2184 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:36:28.140086 kubelet[2184]: I0424 23:36:28.140065 2184 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:36:28.144040 kubelet[2184]: I0424 23:36:28.144011 2184 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:36:28.144040 kubelet[2184]: I0424 23:36:28.144038 2184 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:36:28.144131 kubelet[2184]: I0424 23:36:28.144058 2184 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:36:28.145692 kubelet[2184]: I0424 23:36:28.145494 2184 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:36:28.148919 kubelet[2184]: E0424 23:36:28.148372 2184 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://65.108.57.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-61b787660f&limit=500&resourceVersion=0\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:36:28.148919 kubelet[2184]: E0424 23:36:28.148718 2184 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://65.108.57.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:36:28.149092 kubelet[2184]: I0424 23:36:28.149075 2184 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:36:28.149605 kubelet[2184]: I0424 23:36:28.149583 2184 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:36:28.150612 kubelet[2184]: W0424 23:36:28.150593 2184 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 23:36:28.154494 kubelet[2184]: I0424 23:36:28.154478 2184 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:36:28.154540 kubelet[2184]: I0424 23:36:28.154528 2184 server.go:1289] "Started kubelet"
Apr 24 23:36:28.154686 kubelet[2184]: I0424 23:36:28.154669 2184 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:36:28.157186 kubelet[2184]: I0424 23:36:28.156966 2184 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:36:28.157360 kubelet[2184]: I0424 23:36:28.157329 2184 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:36:28.157714 kubelet[2184]: I0424 23:36:28.157675 2184 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:36:28.159782 kubelet[2184]: I0424 23:36:28.159769 2184 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:36:28.163549 kubelet[2184]: E0424 23:36:28.162502 2184 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://65.108.57.84:6443/api/v1/namespaces/default/events\": dial tcp 65.108.57.84:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-61b787660f.18a96f31ea9b2654 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-61b787660f,UID:ci-4081-3-6-n-61b787660f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-61b787660f,},FirstTimestamp:2026-04-24 23:36:28.154488404 +0000 UTC m=+0.190270827,LastTimestamp:2026-04-24 23:36:28.154488404 +0000 UTC m=+0.190270827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-61b787660f,}"
Apr 24 23:36:28.164024 kubelet[2184]: I0424 23:36:28.164003 2184 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:36:28.164887 kubelet[2184]: I0424 23:36:28.164869 2184 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:36:28.165031 kubelet[2184]: E0424 23:36:28.165010 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:28.165438 kubelet[2184]: I0424 23:36:28.165416 2184 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:36:28.165476 kubelet[2184]: I0424 23:36:28.165452 2184 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:36:28.168269 kubelet[2184]: E0424 23:36:28.167135 2184 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://65.108.57.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:36:28.168269 kubelet[2184]: E0424 23:36:28.167178 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.57.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-61b787660f?timeout=10s\": dial tcp 65.108.57.84:6443: connect: connection refused" interval="200ms"
Apr 24 23:36:28.168269 kubelet[2184]: I0424 23:36:28.167267 2184 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:36:28.168269 kubelet[2184]: I0424 23:36:28.167314 2184 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:36:28.169684 kubelet[2184]: I0424 23:36:28.169212 2184 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:36:28.169775 kubelet[2184]: I0424 23:36:28.169749 2184 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:36:28.174246 kubelet[2184]: E0424 23:36:28.174225 2184 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:36:28.187979 kubelet[2184]: I0424 23:36:28.187948 2184 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:36:28.187979 kubelet[2184]: I0424 23:36:28.187969 2184 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:36:28.188066 kubelet[2184]: I0424 23:36:28.187987 2184 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:36:28.188066 kubelet[2184]: I0424 23:36:28.187994 2184 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:36:28.188066 kubelet[2184]: E0424 23:36:28.188032 2184 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:36:28.194402 kubelet[2184]: E0424 23:36:28.194379 2184 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://65.108.57.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:36:28.197585 kubelet[2184]: I0424 23:36:28.197569 2184 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:36:28.197585 kubelet[2184]: I0424 23:36:28.197581 2184 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:36:28.197651 kubelet[2184]: I0424 23:36:28.197594 2184 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:36:28.207046 kubelet[2184]: I0424 23:36:28.207025 2184 policy_none.go:49] "None policy: Start"
Apr 24 23:36:28.207046 kubelet[2184]: I0424 23:36:28.207043 2184 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:36:28.207106 kubelet[2184]: I0424 23:36:28.207054 2184 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:36:28.211851 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 24 23:36:28.221764 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 24 23:36:28.224319 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 24 23:36:28.234554 kubelet[2184]: E0424 23:36:28.234528 2184 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:36:28.234714 kubelet[2184]: I0424 23:36:28.234691 2184 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:36:28.234750 kubelet[2184]: I0424 23:36:28.234705 2184 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:36:28.235351 kubelet[2184]: I0424 23:36:28.235252 2184 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:36:28.237153 kubelet[2184]: E0424 23:36:28.236873 2184 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 24 23:36:28.237153 kubelet[2184]: E0424 23:36:28.236916 2184 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:28.301322 systemd[1]: Created slice kubepods-burstable-pod7b8cc8f950eb568dfaa7b800480292b2.slice - libcontainer container kubepods-burstable-pod7b8cc8f950eb568dfaa7b800480292b2.slice.
Apr 24 23:36:28.307743 kubelet[2184]: E0424 23:36:28.307697 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.313125 systemd[1]: Created slice kubepods-burstable-pod4b6fcd49a576374d3396fa154ed08436.slice - libcontainer container kubepods-burstable-pod4b6fcd49a576374d3396fa154ed08436.slice.
Apr 24 23:36:28.314735 kubelet[2184]: E0424 23:36:28.314700 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.327826 systemd[1]: Created slice kubepods-burstable-podb0c95d1870f55f264ef3c8ee248248aa.slice - libcontainer container kubepods-burstable-podb0c95d1870f55f264ef3c8ee248248aa.slice.
Apr 24 23:36:28.329505 kubelet[2184]: E0424 23:36:28.329472 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.336599 kubelet[2184]: I0424 23:36:28.336561 2184 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.336869 kubelet[2184]: E0424 23:36:28.336848 2184 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.57.84:6443/api/v1/nodes\": dial tcp 65.108.57.84:6443: connect: connection refused" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367565 kubelet[2184]: I0424 23:36:28.367260 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367565 kubelet[2184]: I0424 23:36:28.367304 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367565 kubelet[2184]: I0424 23:36:28.367333 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cc8f950eb568dfaa7b800480292b2-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-61b787660f\" (UID: \"7b8cc8f950eb568dfaa7b800480292b2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367565 kubelet[2184]: I0424 23:36:28.367373 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b8cc8f950eb568dfaa7b800480292b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-61b787660f\" (UID: \"7b8cc8f950eb568dfaa7b800480292b2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367565 kubelet[2184]: I0424 23:36:28.367399 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367891 kubelet[2184]: I0424 23:36:28.367421 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367891 kubelet[2184]: I0424 23:36:28.367444 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367891 kubelet[2184]: E0424 23:36:28.367447 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.57.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-61b787660f?timeout=10s\": dial tcp 65.108.57.84:6443: connect: connection refused" interval="400ms"
Apr 24 23:36:28.367891 kubelet[2184]: I0424 23:36:28.367470 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0c95d1870f55f264ef3c8ee248248aa-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-61b787660f\" (UID: \"b0c95d1870f55f264ef3c8ee248248aa\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.367891 kubelet[2184]: I0424 23:36:28.367494 2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cc8f950eb568dfaa7b800480292b2-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-61b787660f\" (UID: \"7b8cc8f950eb568dfaa7b800480292b2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.540296 kubelet[2184]: I0424 23:36:28.540250 2184 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.540682 kubelet[2184]: E0424 23:36:28.540654 2184 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.57.84:6443/api/v1/nodes\": dial tcp 65.108.57.84:6443: connect: connection refused" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.612206 containerd[1515]: time="2026-04-24T23:36:28.611665580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-61b787660f,Uid:7b8cc8f950eb568dfaa7b800480292b2,Namespace:kube-system,Attempt:0,}"
Apr 24 23:36:28.616924 containerd[1515]: time="2026-04-24T23:36:28.616777039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-61b787660f,Uid:4b6fcd49a576374d3396fa154ed08436,Namespace:kube-system,Attempt:0,}"
Apr 24 23:36:28.635649 containerd[1515]: time="2026-04-24T23:36:28.635304709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-61b787660f,Uid:b0c95d1870f55f264ef3c8ee248248aa,Namespace:kube-system,Attempt:0,}"
Apr 24 23:36:28.768564 kubelet[2184]: E0424 23:36:28.768497 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.57.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-61b787660f?timeout=10s\": dial tcp 65.108.57.84:6443: connect: connection refused" interval="800ms"
Apr 24 23:36:28.943720 kubelet[2184]: I0424 23:36:28.943658 2184 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:28.944240 kubelet[2184]: E0424 23:36:28.944162 2184 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://65.108.57.84:6443/api/v1/nodes\": dial tcp 65.108.57.84:6443: connect: connection refused" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:29.012663 kubelet[2184]: E0424 23:36:29.012559 2184 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://65.108.57.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:36:29.204767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1980905023.mount: Deactivated successfully.
Apr 24 23:36:29.219195 containerd[1515]: time="2026-04-24T23:36:29.219117801Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:36:29.221976 containerd[1515]: time="2026-04-24T23:36:29.221892865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:36:29.223012 containerd[1515]: time="2026-04-24T23:36:29.222940136Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:36:29.224688 containerd[1515]: time="2026-04-24T23:36:29.224595100Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:36:29.228209 containerd[1515]: time="2026-04-24T23:36:29.227398496Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:36:29.228209 containerd[1515]: time="2026-04-24T23:36:29.227735191Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Apr 24 23:36:29.228209 containerd[1515]: time="2026-04-24T23:36:29.228146038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
Apr 24 23:36:29.231629 containerd[1515]: time="2026-04-24T23:36:29.231578628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 24 23:36:29.236741 containerd[1515]: time="2026-04-24T23:36:29.236695065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.789232ms"
Apr 24 23:36:29.240737 containerd[1515]: time="2026-04-24T23:36:29.240662257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 628.880232ms"
Apr 24 23:36:29.251885 containerd[1515]: time="2026-04-24T23:36:29.251835762Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 616.391173ms"
Apr 24 23:36:29.348582 containerd[1515]: time="2026-04-24T23:36:29.348289383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:36:29.348582 containerd[1515]: time="2026-04-24T23:36:29.348382432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:36:29.348582 containerd[1515]: time="2026-04-24T23:36:29.348393278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:29.348975 containerd[1515]: time="2026-04-24T23:36:29.348596393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:29.354894 containerd[1515]: time="2026-04-24T23:36:29.354703887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:36:29.354894 containerd[1515]: time="2026-04-24T23:36:29.354744388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:36:29.354894 containerd[1515]: time="2026-04-24T23:36:29.354754944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:29.354894 containerd[1515]: time="2026-04-24T23:36:29.354833251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:29.356792 containerd[1515]: time="2026-04-24T23:36:29.356527183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:36:29.356792 containerd[1515]: time="2026-04-24T23:36:29.356592581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:36:29.356792 containerd[1515]: time="2026-04-24T23:36:29.356606261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:29.356792 containerd[1515]: time="2026-04-24T23:36:29.356705600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:29.373985 systemd[1]: Started cri-containerd-4f855b284b1b39210d1b9d68689c423ade6a887f3303673918d5bf15682ca7c4.scope - libcontainer container 4f855b284b1b39210d1b9d68689c423ade6a887f3303673918d5bf15682ca7c4.
Apr 24 23:36:29.381321 systemd[1]: Started cri-containerd-090324cb14046f998d349cfc32ebf2b4b372cb96511def7c8006728a2975187e.scope - libcontainer container 090324cb14046f998d349cfc32ebf2b4b372cb96511def7c8006728a2975187e.
Apr 24 23:36:29.385501 systemd[1]: Started cri-containerd-5ae64a0d9da370c50115508979211a5cb7a227f8780f4e30b7c7278505c6f1dd.scope - libcontainer container 5ae64a0d9da370c50115508979211a5cb7a227f8780f4e30b7c7278505c6f1dd.
Apr 24 23:36:29.425599 containerd[1515]: time="2026-04-24T23:36:29.425567552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-61b787660f,Uid:7b8cc8f950eb568dfaa7b800480292b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"090324cb14046f998d349cfc32ebf2b4b372cb96511def7c8006728a2975187e\""
Apr 24 23:36:29.435841 containerd[1515]: time="2026-04-24T23:36:29.434877411Z" level=info msg="CreateContainer within sandbox \"090324cb14046f998d349cfc32ebf2b4b372cb96511def7c8006728a2975187e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 24 23:36:29.435841 containerd[1515]: time="2026-04-24T23:36:29.435581347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-61b787660f,Uid:4b6fcd49a576374d3396fa154ed08436,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f855b284b1b39210d1b9d68689c423ade6a887f3303673918d5bf15682ca7c4\""
Apr 24 23:36:29.443721 containerd[1515]: time="2026-04-24T23:36:29.443701340Z" level=info msg="CreateContainer within sandbox \"4f855b284b1b39210d1b9d68689c423ade6a887f3303673918d5bf15682ca7c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 24 23:36:29.447171 kubelet[2184]: E0424 23:36:29.447122 2184 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://65.108.57.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-61b787660f&limit=500&resourceVersion=0\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:36:29.452415 containerd[1515]: time="2026-04-24T23:36:29.452387613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-61b787660f,Uid:b0c95d1870f55f264ef3c8ee248248aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae64a0d9da370c50115508979211a5cb7a227f8780f4e30b7c7278505c6f1dd\""
Apr 24 23:36:29.456589 containerd[1515]: time="2026-04-24T23:36:29.456535837Z" level=info msg="CreateContainer within sandbox \"5ae64a0d9da370c50115508979211a5cb7a227f8780f4e30b7c7278505c6f1dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 24 23:36:29.461105 kubelet[2184]: E0424 23:36:29.461068 2184 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://65.108.57.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 65.108.57.84:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:36:29.463303 containerd[1515]: time="2026-04-24T23:36:29.463262630Z" level=info msg="CreateContainer within sandbox \"090324cb14046f998d349cfc32ebf2b4b372cb96511def7c8006728a2975187e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7b627c4cbf8dc2ba489b8142fe3850763c6f69019a3387ddda39555c3788ab2d\""
Apr 24 23:36:29.463882 containerd[1515]: time="2026-04-24T23:36:29.463772125Z" level=info msg="StartContainer for \"7b627c4cbf8dc2ba489b8142fe3850763c6f69019a3387ddda39555c3788ab2d\""
Apr 24 23:36:29.467295 containerd[1515]: time="2026-04-24T23:36:29.467268601Z" level=info msg="CreateContainer within sandbox \"4f855b284b1b39210d1b9d68689c423ade6a887f3303673918d5bf15682ca7c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964\""
Apr 24 23:36:29.467543 containerd[1515]: time="2026-04-24T23:36:29.467521030Z" level=info msg="StartContainer for \"e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964\""
Apr 24 23:36:29.476520 containerd[1515]: time="2026-04-24T23:36:29.475882335Z" level=info msg="CreateContainer within sandbox \"5ae64a0d9da370c50115508979211a5cb7a227f8780f4e30b7c7278505c6f1dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8\""
Apr 24 23:36:29.476784 containerd[1515]: time="2026-04-24T23:36:29.476760092Z" level=info msg="StartContainer for \"354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8\""
Apr 24 23:36:29.498931 systemd[1]: Started cri-containerd-7b627c4cbf8dc2ba489b8142fe3850763c6f69019a3387ddda39555c3788ab2d.scope - libcontainer container 7b627c4cbf8dc2ba489b8142fe3850763c6f69019a3387ddda39555c3788ab2d.
Apr 24 23:36:29.509926 systemd[1]: Started cri-containerd-e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964.scope - libcontainer container e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964.
Apr 24 23:36:29.513538 systemd[1]: Started cri-containerd-354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8.scope - libcontainer container 354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8.
Apr 24 23:36:29.555456 containerd[1515]: time="2026-04-24T23:36:29.555358362Z" level=info msg="StartContainer for \"7b627c4cbf8dc2ba489b8142fe3850763c6f69019a3387ddda39555c3788ab2d\" returns successfully"
Apr 24 23:36:29.565415 containerd[1515]: time="2026-04-24T23:36:29.565355933Z" level=info msg="StartContainer for \"e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964\" returns successfully"
Apr 24 23:36:29.569689 kubelet[2184]: E0424 23:36:29.569651 2184 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://65.108.57.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-61b787660f?timeout=10s\": dial tcp 65.108.57.84:6443: connect: connection refused" interval="1.6s"
Apr 24 23:36:29.583644 containerd[1515]: time="2026-04-24T23:36:29.583603311Z" level=info msg="StartContainer for \"354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8\" returns successfully"
Apr 24 23:36:29.746896 kubelet[2184]: I0424 23:36:29.746325 2184 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:30.203708 kubelet[2184]: E0424 23:36:30.203446 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:30.204603 kubelet[2184]: E0424 23:36:30.204582 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:30.206056 kubelet[2184]: E0424 23:36:30.206037 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:30.560626 kubelet[2184]: I0424 23:36:30.560422 2184 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:30.560626 kubelet[2184]: E0424 23:36:30.560448 2184 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-61b787660f\": node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:30.568948 kubelet[2184]: E0424 23:36:30.568909 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:30.669648 kubelet[2184]: E0424 23:36:30.669557 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:30.770506 kubelet[2184]: E0424 23:36:30.770443 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:30.871939 kubelet[2184]: E0424 23:36:30.871643 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:30.972591 kubelet[2184]: E0424 23:36:30.972518 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.073490 kubelet[2184]: E0424 23:36:31.073408 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.173733 kubelet[2184]: E0424 23:36:31.173686 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.210137 kubelet[2184]: E0424 23:36:31.209888 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:31.210769 kubelet[2184]: E0424 23:36:31.210213 2184 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-61b787660f\" not found" node="ci-4081-3-6-n-61b787660f"
Apr 24 23:36:31.273900 kubelet[2184]: E0424 23:36:31.273796 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.374615 kubelet[2184]: E0424 23:36:31.374535 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.475704 kubelet[2184]: E0424 23:36:31.475446 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.576348 kubelet[2184]: E0424 23:36:31.576247 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.677601 kubelet[2184]: E0424 23:36:31.676920 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.778091 kubelet[2184]: E0424 23:36:31.777879 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.878842 kubelet[2184]: E0424 23:36:31.878745 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:31.979457 kubelet[2184]: E0424 23:36:31.979397 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:32.080707 kubelet[2184]: E0424 23:36:32.080532 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:32.181598 kubelet[2184]: E0424 23:36:32.181529 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:32.282639 kubelet[2184]: E0424 23:36:32.282570 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:32.384070 kubelet[2184]: E0424 23:36:32.383889 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:32.427511 systemd[1]: Reloading requested from client PID 2468 ('systemctl') (unit session-7.scope)...
Apr 24 23:36:32.427528 systemd[1]: Reloading...
Apr 24 23:36:32.484403 kubelet[2184]: E0424 23:36:32.484365 2184 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found"
Apr 24 23:36:32.532890 zram_generator::config[2514]: No configuration found.
Apr 24 23:36:32.565563 kubelet[2184]: I0424 23:36:32.565273 2184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:32.573957 kubelet[2184]: I0424 23:36:32.573916 2184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:32.578821 kubelet[2184]: I0424 23:36:32.577662 2184 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f"
Apr 24 23:36:32.616463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:36:32.685951 systemd[1]: Reloading finished in 257 ms.
Apr 24 23:36:32.748626 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:32.761238 systemd[1]: kubelet.service: Deactivated successfully.
Apr 24 23:36:32.761486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:32.771208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:36:32.885316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:36:32.889607 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:36:32.928844 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:36:32.928844 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:36:32.928844 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:36:32.929443 kubelet[2559]: I0424 23:36:32.928883 2559 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:36:32.936891 kubelet[2559]: I0424 23:36:32.936730 2559 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:36:32.936891 kubelet[2559]: I0424 23:36:32.936749 2559 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:36:32.937273 kubelet[2559]: I0424 23:36:32.936940 2559 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:36:32.938152 kubelet[2559]: I0424 23:36:32.938000 2559 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 24 23:36:32.939867 kubelet[2559]: I0424 23:36:32.939518 2559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:36:32.942576 kubelet[2559]: E0424 23:36:32.942558 2559 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:36:32.942675 kubelet[2559]: I0424 23:36:32.942667 2559 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:36:32.946596 kubelet[2559]: I0424 23:36:32.946569 2559 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:36:32.946812 kubelet[2559]: I0424 23:36:32.946770 2559 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:36:32.946910 kubelet[2559]: I0424 23:36:32.946793 2559 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-61b787660f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan"
,"Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:36:32.946910 kubelet[2559]: I0424 23:36:32.946910 2559 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 23:36:32.947012 kubelet[2559]: I0424 23:36:32.946918 2559 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:36:32.947035 kubelet[2559]: I0424 23:36:32.947023 2559 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:36:32.947212 kubelet[2559]: I0424 23:36:32.947199 2559 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:36:32.947212 kubelet[2559]: I0424 23:36:32.947212 2559 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:36:32.950234 kubelet[2559]: I0424 23:36:32.947236 2559 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:36:32.950234 kubelet[2559]: I0424 23:36:32.947248 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:36:32.951700 kubelet[2559]: I0424 23:36:32.951604 2559 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:36:32.952743 kubelet[2559]: I0424 23:36:32.952571 2559 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:36:32.956272 kubelet[2559]: I0424 23:36:32.956247 2559 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:36:32.956316 kubelet[2559]: I0424 23:36:32.956279 2559 server.go:1289] "Started kubelet" Apr 24 
23:36:32.957429 kubelet[2559]: I0424 23:36:32.957404 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 23:36:32.957972 kubelet[2559]: I0424 23:36:32.957921 2559 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:36:32.958493 kubelet[2559]: I0424 23:36:32.958474 2559 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:36:32.961824 kubelet[2559]: I0424 23:36:32.961316 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:36:32.961824 kubelet[2559]: I0424 23:36:32.961499 2559 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:36:32.961824 kubelet[2559]: I0424 23:36:32.961637 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:36:32.963370 kubelet[2559]: I0424 23:36:32.963352 2559 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:36:32.963428 kubelet[2559]: E0424 23:36:32.963412 2559 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-61b787660f\" not found" Apr 24 23:36:32.963735 kubelet[2559]: I0424 23:36:32.963719 2559 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:36:32.965030 kubelet[2559]: I0424 23:36:32.963909 2559 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:36:32.966163 kubelet[2559]: I0424 23:36:32.965915 2559 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:36:32.968990 kubelet[2559]: I0424 23:36:32.968887 2559 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:36:32.970578 kubelet[2559]: I0424 23:36:32.970545 2559 factory.go:223] Registration 
of the systemd container factory successfully Apr 24 23:36:32.973966 kubelet[2559]: I0424 23:36:32.973939 2559 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:36:32.974951 kubelet[2559]: I0424 23:36:32.974931 2559 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:36:32.974951 kubelet[2559]: I0424 23:36:32.974948 2559 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:36:32.975008 kubelet[2559]: I0424 23:36:32.974962 2559 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 23:36:32.975008 kubelet[2559]: I0424 23:36:32.974968 2559 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:36:32.975008 kubelet[2559]: E0424 23:36:32.975001 2559 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:36:33.026597 kubelet[2559]: I0424 23:36:33.026437 2559 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:36:33.026597 kubelet[2559]: I0424 23:36:33.026573 2559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:36:33.026597 kubelet[2559]: I0424 23:36:33.026590 2559 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:36:33.026778 kubelet[2559]: I0424 23:36:33.026748 2559 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:36:33.026778 kubelet[2559]: I0424 23:36:33.026761 2559 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:36:33.026778 kubelet[2559]: I0424 23:36:33.026775 2559 policy_none.go:49] "None policy: Start" Apr 24 23:36:33.026862 kubelet[2559]: I0424 23:36:33.026787 2559 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:36:33.026862 kubelet[2559]: I0424 23:36:33.026851 2559 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:36:33.026996 kubelet[2559]: 
I0424 23:36:33.026968 2559 state_mem.go:75] "Updated machine memory state" Apr 24 23:36:33.030336 kubelet[2559]: E0424 23:36:33.030303 2559 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:36:33.031357 kubelet[2559]: I0424 23:36:33.030454 2559 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:36:33.031357 kubelet[2559]: I0424 23:36:33.030467 2559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:36:33.031357 kubelet[2559]: I0424 23:36:33.030976 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:36:33.033511 kubelet[2559]: E0424 23:36:33.033128 2559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 23:36:33.076315 kubelet[2559]: I0424 23:36:33.076267 2559 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.077101 kubelet[2559]: I0424 23:36:33.076605 2559 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.077101 kubelet[2559]: I0424 23:36:33.076728 2559 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.083666 kubelet[2559]: E0424 23:36:33.083636 2559 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.084839 kubelet[2559]: E0424 23:36:33.084787 2559 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-61b787660f\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.084983 kubelet[2559]: 
E0424 23:36:33.084955 2559 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-61b787660f\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.135616 kubelet[2559]: I0424 23:36:33.135365 2559 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.143758 kubelet[2559]: I0424 23:36:33.143706 2559 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.143938 kubelet[2559]: I0424 23:36:33.143875 2559 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165727 kubelet[2559]: I0424 23:36:33.165690 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cc8f950eb568dfaa7b800480292b2-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-61b787660f\" (UID: \"7b8cc8f950eb568dfaa7b800480292b2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165727 kubelet[2559]: I0424 23:36:33.165726 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165727 kubelet[2559]: I0424 23:36:33.165741 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b0c95d1870f55f264ef3c8ee248248aa-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-61b787660f\" (UID: \"b0c95d1870f55f264ef3c8ee248248aa\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165878 kubelet[2559]: I0424 
23:36:33.165754 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b8cc8f950eb568dfaa7b800480292b2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-61b787660f\" (UID: \"7b8cc8f950eb568dfaa7b800480292b2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165878 kubelet[2559]: I0424 23:36:33.165799 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165878 kubelet[2559]: I0424 23:36:33.165836 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165878 kubelet[2559]: I0424 23:36:33.165847 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165878 kubelet[2559]: I0424 23:36:33.165858 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b6fcd49a576374d3396fa154ed08436-kubeconfig\") pod 
\"kube-controller-manager-ci-4081-3-6-n-61b787660f\" (UID: \"4b6fcd49a576374d3396fa154ed08436\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.165962 kubelet[2559]: I0424 23:36:33.165870 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b8cc8f950eb568dfaa7b800480292b2-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-61b787660f\" (UID: \"7b8cc8f950eb568dfaa7b800480292b2\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" Apr 24 23:36:33.949069 kubelet[2559]: I0424 23:36:33.949019 2559 apiserver.go:52] "Watching apiserver" Apr 24 23:36:33.964401 kubelet[2559]: I0424 23:36:33.964370 2559 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:36:34.002280 kubelet[2559]: I0424 23:36:34.002246 2559 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" Apr 24 23:36:34.012702 kubelet[2559]: E0424 23:36:34.012665 2559 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-61b787660f\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" Apr 24 23:36:34.035635 kubelet[2559]: I0424 23:36:34.035567 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-61b787660f" podStartSLOduration=2.035550967 podStartE2EDuration="2.035550967s" podCreationTimestamp="2026-04-24 23:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:34.025067057 +0000 UTC m=+1.130969146" watchObservedRunningTime="2026-04-24 23:36:34.035550967 +0000 UTC m=+1.141453056" Apr 24 23:36:34.047925 kubelet[2559]: I0424 23:36:34.047795 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-61b787660f" podStartSLOduration=2.047768939 podStartE2EDuration="2.047768939s" podCreationTimestamp="2026-04-24 23:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:34.036478449 +0000 UTC m=+1.142380538" watchObservedRunningTime="2026-04-24 23:36:34.047768939 +0000 UTC m=+1.153671019" Apr 24 23:36:34.048040 kubelet[2559]: I0424 23:36:34.047927 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-61b787660f" podStartSLOduration=2.047924613 podStartE2EDuration="2.047924613s" podCreationTimestamp="2026-04-24 23:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:34.047760026 +0000 UTC m=+1.153662105" watchObservedRunningTime="2026-04-24 23:36:34.047924613 +0000 UTC m=+1.153826702" Apr 24 23:36:37.633776 kubelet[2559]: I0424 23:36:37.633597 2559 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:36:37.634982 kubelet[2559]: I0424 23:36:37.634477 2559 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:36:37.635088 containerd[1515]: time="2026-04-24T23:36:37.634175213Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:36:38.446183 systemd[1]: Created slice kubepods-besteffort-pod020631f3_a04b_4348_9891_03e5e5f3346a.slice - libcontainer container kubepods-besteffort-pod020631f3_a04b_4348_9891_03e5e5f3346a.slice. 
Apr 24 23:36:38.506577 kubelet[2559]: I0424 23:36:38.506533 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt67w\" (UniqueName: \"kubernetes.io/projected/020631f3-a04b-4348-9891-03e5e5f3346a-kube-api-access-qt67w\") pod \"kube-proxy-p4bps\" (UID: \"020631f3-a04b-4348-9891-03e5e5f3346a\") " pod="kube-system/kube-proxy-p4bps" Apr 24 23:36:38.506577 kubelet[2559]: I0424 23:36:38.506573 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/020631f3-a04b-4348-9891-03e5e5f3346a-lib-modules\") pod \"kube-proxy-p4bps\" (UID: \"020631f3-a04b-4348-9891-03e5e5f3346a\") " pod="kube-system/kube-proxy-p4bps" Apr 24 23:36:38.506577 kubelet[2559]: I0424 23:36:38.506587 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/020631f3-a04b-4348-9891-03e5e5f3346a-kube-proxy\") pod \"kube-proxy-p4bps\" (UID: \"020631f3-a04b-4348-9891-03e5e5f3346a\") " pod="kube-system/kube-proxy-p4bps" Apr 24 23:36:38.506577 kubelet[2559]: I0424 23:36:38.506602 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/020631f3-a04b-4348-9891-03e5e5f3346a-xtables-lock\") pod \"kube-proxy-p4bps\" (UID: \"020631f3-a04b-4348-9891-03e5e5f3346a\") " pod="kube-system/kube-proxy-p4bps" Apr 24 23:36:38.756263 containerd[1515]: time="2026-04-24T23:36:38.754509761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4bps,Uid:020631f3-a04b-4348-9891-03e5e5f3346a,Namespace:kube-system,Attempt:0,}" Apr 24 23:36:38.784366 systemd[1]: Created slice kubepods-besteffort-podc525af0b_e02f_483f_ac86_e9249dde3046.slice - libcontainer container kubepods-besteffort-podc525af0b_e02f_483f_ac86_e9249dde3046.slice. 
Apr 24 23:36:38.792081 containerd[1515]: time="2026-04-24T23:36:38.792006863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:38.792081 containerd[1515]: time="2026-04-24T23:36:38.792055586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:38.792186 containerd[1515]: time="2026-04-24T23:36:38.792063317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:38.792186 containerd[1515]: time="2026-04-24T23:36:38.792140323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:38.808464 kubelet[2559]: I0424 23:36:38.808432 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c525af0b-e02f-483f-ac86-e9249dde3046-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-lpn9f\" (UID: \"c525af0b-e02f-483f-ac86-e9249dde3046\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lpn9f" Apr 24 23:36:38.811017 kubelet[2559]: I0424 23:36:38.808657 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgkf\" (UniqueName: \"kubernetes.io/projected/c525af0b-e02f-483f-ac86-e9249dde3046-kube-api-access-fpgkf\") pod \"tigera-operator-6bf85f8dd-lpn9f\" (UID: \"c525af0b-e02f-483f-ac86-e9249dde3046\") " pod="tigera-operator/tigera-operator-6bf85f8dd-lpn9f" Apr 24 23:36:38.811084 systemd[1]: Started cri-containerd-ec8300625d2f8a5531eee5ffea9fe3a51cfcb05e7b6a5d1ee3cc58b7c6834dd6.scope - libcontainer container ec8300625d2f8a5531eee5ffea9fe3a51cfcb05e7b6a5d1ee3cc58b7c6834dd6. 
Apr 24 23:36:38.828693 containerd[1515]: time="2026-04-24T23:36:38.828628140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p4bps,Uid:020631f3-a04b-4348-9891-03e5e5f3346a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec8300625d2f8a5531eee5ffea9fe3a51cfcb05e7b6a5d1ee3cc58b7c6834dd6\"" Apr 24 23:36:38.833469 containerd[1515]: time="2026-04-24T23:36:38.833442904Z" level=info msg="CreateContainer within sandbox \"ec8300625d2f8a5531eee5ffea9fe3a51cfcb05e7b6a5d1ee3cc58b7c6834dd6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:36:38.846844 containerd[1515]: time="2026-04-24T23:36:38.846748528Z" level=info msg="CreateContainer within sandbox \"ec8300625d2f8a5531eee5ffea9fe3a51cfcb05e7b6a5d1ee3cc58b7c6834dd6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5e3ddb9061bf9ec5e767fc655114c020a2a4e1e913405c15aeb55b02940f8497\"" Apr 24 23:36:38.847253 containerd[1515]: time="2026-04-24T23:36:38.847240726Z" level=info msg="StartContainer for \"5e3ddb9061bf9ec5e767fc655114c020a2a4e1e913405c15aeb55b02940f8497\"" Apr 24 23:36:38.869005 systemd[1]: Started cri-containerd-5e3ddb9061bf9ec5e767fc655114c020a2a4e1e913405c15aeb55b02940f8497.scope - libcontainer container 5e3ddb9061bf9ec5e767fc655114c020a2a4e1e913405c15aeb55b02940f8497. Apr 24 23:36:38.896839 containerd[1515]: time="2026-04-24T23:36:38.895927807Z" level=info msg="StartContainer for \"5e3ddb9061bf9ec5e767fc655114c020a2a4e1e913405c15aeb55b02940f8497\" returns successfully" Apr 24 23:36:39.091086 containerd[1515]: time="2026-04-24T23:36:39.090385024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lpn9f,Uid:c525af0b-e02f-483f-ac86-e9249dde3046,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:36:39.133815 containerd[1515]: time="2026-04-24T23:36:39.133462719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:39.133815 containerd[1515]: time="2026-04-24T23:36:39.133558823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:39.133815 containerd[1515]: time="2026-04-24T23:36:39.133578883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:39.134615 containerd[1515]: time="2026-04-24T23:36:39.134514196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:39.157079 systemd[1]: Started cri-containerd-b3282e86a3bd00ef4d4f2de6ac1b965b81f07bf9f48acc62ad8349a03c1ec3e9.scope - libcontainer container b3282e86a3bd00ef4d4f2de6ac1b965b81f07bf9f48acc62ad8349a03c1ec3e9. Apr 24 23:36:39.196790 containerd[1515]: time="2026-04-24T23:36:39.196673241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-lpn9f,Uid:c525af0b-e02f-483f-ac86-e9249dde3046,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b3282e86a3bd00ef4d4f2de6ac1b965b81f07bf9f48acc62ad8349a03c1ec3e9\"" Apr 24 23:36:39.199077 containerd[1515]: time="2026-04-24T23:36:39.199034664Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:36:39.420631 kubelet[2559]: I0424 23:36:39.420447 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p4bps" podStartSLOduration=1.420425582 podStartE2EDuration="1.420425582s" podCreationTimestamp="2026-04-24 23:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:39.020454189 +0000 UTC m=+6.126356268" watchObservedRunningTime="2026-04-24 23:36:39.420425582 +0000 UTC m=+6.526327712" Apr 24 23:36:40.619515 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3727643132.mount: Deactivated successfully. Apr 24 23:36:41.221223 containerd[1515]: time="2026-04-24T23:36:41.221170780Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:41.222322 containerd[1515]: time="2026-04-24T23:36:41.222283921Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 24 23:36:41.223163 containerd[1515]: time="2026-04-24T23:36:41.223133786Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:41.225094 containerd[1515]: time="2026-04-24T23:36:41.225060076Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:41.225570 containerd[1515]: time="2026-04-24T23:36:41.225546235Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.026483369s" Apr 24 23:36:41.225597 containerd[1515]: time="2026-04-24T23:36:41.225572174Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 24 23:36:41.228879 containerd[1515]: time="2026-04-24T23:36:41.228848230Z" level=info msg="CreateContainer within sandbox \"b3282e86a3bd00ef4d4f2de6ac1b965b81f07bf9f48acc62ad8349a03c1ec3e9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:36:41.245814 containerd[1515]: 
time="2026-04-24T23:36:41.245780465Z" level=info msg="CreateContainer within sandbox \"b3282e86a3bd00ef4d4f2de6ac1b965b81f07bf9f48acc62ad8349a03c1ec3e9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2\"" Apr 24 23:36:41.249214 containerd[1515]: time="2026-04-24T23:36:41.249147687Z" level=info msg="StartContainer for \"60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2\"" Apr 24 23:36:41.277923 systemd[1]: Started cri-containerd-60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2.scope - libcontainer container 60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2. Apr 24 23:36:41.302409 containerd[1515]: time="2026-04-24T23:36:41.302229652Z" level=info msg="StartContainer for \"60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2\" returns successfully" Apr 24 23:36:42.622159 kubelet[2559]: I0424 23:36:42.622088 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-lpn9f" podStartSLOduration=2.593839903 podStartE2EDuration="4.622069322s" podCreationTimestamp="2026-04-24 23:36:38 +0000 UTC" firstStartedPulling="2026-04-24 23:36:39.198178069 +0000 UTC m=+6.304080148" lastFinishedPulling="2026-04-24 23:36:41.226407488 +0000 UTC m=+8.332309567" observedRunningTime="2026-04-24 23:36:42.027647471 +0000 UTC m=+9.133549560" watchObservedRunningTime="2026-04-24 23:36:42.622069322 +0000 UTC m=+9.727971421" Apr 24 23:36:46.651020 sudo[1696]: pam_unix(sudo:session): session closed for user root Apr 24 23:36:46.684836 sshd[1693]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:46.689748 systemd[1]: sshd@6-65.108.57.84:22-4.175.71.9:50804.service: Deactivated successfully. Apr 24 23:36:46.691377 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 24 23:36:46.694373 systemd[1]: session-7.scope: Consumed 3.484s CPU time, 159.4M memory peak, 0B memory swap peak. Apr 24 23:36:46.695778 systemd-logind[1493]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:36:46.698078 systemd-logind[1493]: Removed session 7. Apr 24 23:36:48.453849 systemd[1]: Created slice kubepods-besteffort-podd6500cc1_d85a_416c_9b18_82d9f816bfe4.slice - libcontainer container kubepods-besteffort-podd6500cc1_d85a_416c_9b18_82d9f816bfe4.slice. Apr 24 23:36:48.476090 kubelet[2559]: I0424 23:36:48.475992 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6500cc1-d85a-416c-9b18-82d9f816bfe4-tigera-ca-bundle\") pod \"calico-typha-c4d48c589-s4z5x\" (UID: \"d6500cc1-d85a-416c-9b18-82d9f816bfe4\") " pod="calico-system/calico-typha-c4d48c589-s4z5x" Apr 24 23:36:48.476090 kubelet[2559]: I0424 23:36:48.476025 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d6500cc1-d85a-416c-9b18-82d9f816bfe4-typha-certs\") pod \"calico-typha-c4d48c589-s4z5x\" (UID: \"d6500cc1-d85a-416c-9b18-82d9f816bfe4\") " pod="calico-system/calico-typha-c4d48c589-s4z5x" Apr 24 23:36:48.476090 kubelet[2559]: I0424 23:36:48.476040 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89vv\" (UniqueName: \"kubernetes.io/projected/d6500cc1-d85a-416c-9b18-82d9f816bfe4-kube-api-access-h89vv\") pod \"calico-typha-c4d48c589-s4z5x\" (UID: \"d6500cc1-d85a-416c-9b18-82d9f816bfe4\") " pod="calico-system/calico-typha-c4d48c589-s4z5x" Apr 24 23:36:48.518533 systemd[1]: Created slice kubepods-besteffort-pod3a2c1f89_b23d_43d9_848c_14d1f32a38e4.slice - libcontainer container kubepods-besteffort-pod3a2c1f89_b23d_43d9_848c_14d1f32a38e4.slice. 
Apr 24 23:36:48.577143 kubelet[2559]: I0424 23:36:48.577100 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-cni-log-dir\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577143 kubelet[2559]: I0424 23:36:48.577134 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-node-certs\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577143 kubelet[2559]: I0424 23:36:48.577145 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-var-run-calico\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577143 kubelet[2559]: I0424 23:36:48.577156 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-cni-bin-dir\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577333 kubelet[2559]: I0424 23:36:48.577167 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-flexvol-driver-host\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577333 kubelet[2559]: I0424 23:36:48.577180 2559 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-nodeproc\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577333 kubelet[2559]: I0424 23:36:48.577190 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-tigera-ca-bundle\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577333 kubelet[2559]: I0424 23:36:48.577215 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-xtables-lock\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577333 kubelet[2559]: I0424 23:36:48.577229 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9cv9\" (UniqueName: \"kubernetes.io/projected/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-kube-api-access-w9cv9\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577419 kubelet[2559]: I0424 23:36:48.577262 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-bpffs\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577419 kubelet[2559]: I0424 23:36:48.577273 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-sys-fs\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577419 kubelet[2559]: I0424 23:36:48.577285 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-policysync\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577419 kubelet[2559]: I0424 23:36:48.577297 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-lib-modules\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577419 kubelet[2559]: I0424 23:36:48.577317 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-cni-net-dir\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.577540 kubelet[2559]: I0424 23:36:48.577330 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a2c1f89-b23d-43d9-848c-14d1f32a38e4-var-lib-calico\") pod \"calico-node-j25x6\" (UID: \"3a2c1f89-b23d-43d9-848c-14d1f32a38e4\") " pod="calico-system/calico-node-j25x6" Apr 24 23:36:48.621817 kubelet[2559]: E0424 23:36:48.621739 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:36:48.678462 kubelet[2559]: I0424 23:36:48.678378 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8788446e-4c60-4f01-947d-08e34daf3b75-kubelet-dir\") pod \"csi-node-driver-g5mf4\" (UID: \"8788446e-4c60-4f01-947d-08e34daf3b75\") " pod="calico-system/csi-node-driver-g5mf4" Apr 24 23:36:48.679162 kubelet[2559]: I0424 23:36:48.679131 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8788446e-4c60-4f01-947d-08e34daf3b75-varrun\") pod \"csi-node-driver-g5mf4\" (UID: \"8788446e-4c60-4f01-947d-08e34daf3b75\") " pod="calico-system/csi-node-driver-g5mf4" Apr 24 23:36:48.679223 kubelet[2559]: I0424 23:36:48.679181 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8788446e-4c60-4f01-947d-08e34daf3b75-registration-dir\") pod \"csi-node-driver-g5mf4\" (UID: \"8788446e-4c60-4f01-947d-08e34daf3b75\") " pod="calico-system/csi-node-driver-g5mf4" Apr 24 23:36:48.679223 kubelet[2559]: I0424 23:36:48.679199 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8788446e-4c60-4f01-947d-08e34daf3b75-socket-dir\") pod \"csi-node-driver-g5mf4\" (UID: \"8788446e-4c60-4f01-947d-08e34daf3b75\") " pod="calico-system/csi-node-driver-g5mf4" Apr 24 23:36:48.679223 kubelet[2559]: I0424 23:36:48.679210 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvf7\" (UniqueName: \"kubernetes.io/projected/8788446e-4c60-4f01-947d-08e34daf3b75-kube-api-access-jxvf7\") pod 
\"csi-node-driver-g5mf4\" (UID: \"8788446e-4c60-4f01-947d-08e34daf3b75\") " pod="calico-system/csi-node-driver-g5mf4" Apr 24 23:36:48.683247 kubelet[2559]: E0424 23:36:48.683090 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.683247 kubelet[2559]: W0424 23:36:48.683112 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.683247 kubelet[2559]: E0424 23:36:48.683134 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.683535 kubelet[2559]: E0424 23:36:48.683499 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.683535 kubelet[2559]: W0424 23:36:48.683508 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.683616 kubelet[2559]: E0424 23:36:48.683517 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.686063 kubelet[2559]: E0424 23:36:48.686035 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.686063 kubelet[2559]: W0424 23:36:48.686052 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.686063 kubelet[2559]: E0424 23:36:48.686067 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.686480 kubelet[2559]: E0424 23:36:48.686451 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.686480 kubelet[2559]: W0424 23:36:48.686465 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.686480 kubelet[2559]: E0424 23:36:48.686473 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.698187 kubelet[2559]: E0424 23:36:48.698160 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.698187 kubelet[2559]: W0424 23:36:48.698175 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.698187 kubelet[2559]: E0424 23:36:48.698185 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.760442 containerd[1515]: time="2026-04-24T23:36:48.760300747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c4d48c589-s4z5x,Uid:d6500cc1-d85a-416c-9b18-82d9f816bfe4,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:48.780852 kubelet[2559]: E0424 23:36:48.780152 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.780852 kubelet[2559]: W0424 23:36:48.780169 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.780852 kubelet[2559]: E0424 23:36:48.780186 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.780852 kubelet[2559]: E0424 23:36:48.780410 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.780852 kubelet[2559]: W0424 23:36:48.780416 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.780852 kubelet[2559]: E0424 23:36:48.780423 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.780852 kubelet[2559]: E0424 23:36:48.780634 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.780852 kubelet[2559]: W0424 23:36:48.780640 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.780852 kubelet[2559]: E0424 23:36:48.780646 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.780852 kubelet[2559]: E0424 23:36:48.780859 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.781379 kubelet[2559]: W0424 23:36:48.780866 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.781379 kubelet[2559]: E0424 23:36:48.780873 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.781379 kubelet[2559]: E0424 23:36:48.781095 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.781379 kubelet[2559]: W0424 23:36:48.781101 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.781379 kubelet[2559]: E0424 23:36:48.781109 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.781578 kubelet[2559]: E0424 23:36:48.781416 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.781578 kubelet[2559]: W0424 23:36:48.781423 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.781578 kubelet[2559]: E0424 23:36:48.781431 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.781881 kubelet[2559]: E0424 23:36:48.781796 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.781881 kubelet[2559]: W0424 23:36:48.781820 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.781881 kubelet[2559]: E0424 23:36:48.781826 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.782487 kubelet[2559]: E0424 23:36:48.782013 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.782487 kubelet[2559]: W0424 23:36:48.782023 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.782487 kubelet[2559]: E0424 23:36:48.782030 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.782487 kubelet[2559]: E0424 23:36:48.782243 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.782487 kubelet[2559]: W0424 23:36:48.782301 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.782487 kubelet[2559]: E0424 23:36:48.782310 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.782712 kubelet[2559]: E0424 23:36:48.782503 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.782712 kubelet[2559]: W0424 23:36:48.782510 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.782712 kubelet[2559]: E0424 23:36:48.782516 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.782712 kubelet[2559]: E0424 23:36:48.782705 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.782712 kubelet[2559]: W0424 23:36:48.782711 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.782712 kubelet[2559]: E0424 23:36:48.782717 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.783004 kubelet[2559]: E0424 23:36:48.782962 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.783004 kubelet[2559]: W0424 23:36:48.782968 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.783004 kubelet[2559]: E0424 23:36:48.782975 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.783238 kubelet[2559]: E0424 23:36:48.783226 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.783238 kubelet[2559]: W0424 23:36:48.783236 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.783276 kubelet[2559]: E0424 23:36:48.783243 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.784186 kubelet[2559]: E0424 23:36:48.784169 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.784186 kubelet[2559]: W0424 23:36:48.784183 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.784261 kubelet[2559]: E0424 23:36:48.784191 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.784739 kubelet[2559]: E0424 23:36:48.784721 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.784739 kubelet[2559]: W0424 23:36:48.784735 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.784836 kubelet[2559]: E0424 23:36:48.784743 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.785273 kubelet[2559]: E0424 23:36:48.785234 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.785273 kubelet[2559]: W0424 23:36:48.785250 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.785273 kubelet[2559]: E0424 23:36:48.785259 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.786482 kubelet[2559]: E0424 23:36:48.786444 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.786482 kubelet[2559]: W0424 23:36:48.786457 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.786482 kubelet[2559]: E0424 23:36:48.786465 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.786737 kubelet[2559]: E0424 23:36:48.786672 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.786737 kubelet[2559]: W0424 23:36:48.786681 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.786737 kubelet[2559]: E0424 23:36:48.786688 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.786969 kubelet[2559]: E0424 23:36:48.786898 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.786969 kubelet[2559]: W0424 23:36:48.786907 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.786969 kubelet[2559]: E0424 23:36:48.786913 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.787535 kubelet[2559]: E0424 23:36:48.787447 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.787535 kubelet[2559]: W0424 23:36:48.787456 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.787535 kubelet[2559]: E0424 23:36:48.787464 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.788291 kubelet[2559]: E0424 23:36:48.787966 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.788291 kubelet[2559]: W0424 23:36:48.787976 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.788291 kubelet[2559]: E0424 23:36:48.787984 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.788714 kubelet[2559]: E0424 23:36:48.788695 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.788714 kubelet[2559]: W0424 23:36:48.788708 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.788714 kubelet[2559]: E0424 23:36:48.788715 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.788901 kubelet[2559]: E0424 23:36:48.788885 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.788901 kubelet[2559]: W0424 23:36:48.788896 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.788901 kubelet[2559]: E0424 23:36:48.788902 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:48.789245 kubelet[2559]: E0424 23:36:48.789073 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.789245 kubelet[2559]: W0424 23:36:48.789082 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.789245 kubelet[2559]: E0424 23:36:48.789088 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.789373 containerd[1515]: time="2026-04-24T23:36:48.788977699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:48.789373 containerd[1515]: time="2026-04-24T23:36:48.789031907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:48.789373 containerd[1515]: time="2026-04-24T23:36:48.789041994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.789373 containerd[1515]: time="2026-04-24T23:36:48.789102211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.789461 kubelet[2559]: E0424 23:36:48.789362 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.789461 kubelet[2559]: W0424 23:36:48.789369 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.789461 kubelet[2559]: E0424 23:36:48.789376 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.792248 kubelet[2559]: E0424 23:36:48.792227 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:48.792248 kubelet[2559]: W0424 23:36:48.792241 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:48.792248 kubelet[2559]: E0424 23:36:48.792249 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:48.804939 systemd[1]: Started cri-containerd-756d86928480d0cb1cf6445fcf50e4408c4affe75dfab08e14e62a1b668595df.scope - libcontainer container 756d86928480d0cb1cf6445fcf50e4408c4affe75dfab08e14e62a1b668595df. 
Apr 24 23:36:48.821544 containerd[1515]: time="2026-04-24T23:36:48.821487509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j25x6,Uid:3a2c1f89-b23d-43d9-848c-14d1f32a38e4,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:48.839726 containerd[1515]: time="2026-04-24T23:36:48.839682601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c4d48c589-s4z5x,Uid:d6500cc1-d85a-416c-9b18-82d9f816bfe4,Namespace:calico-system,Attempt:0,} returns sandbox id \"756d86928480d0cb1cf6445fcf50e4408c4affe75dfab08e14e62a1b668595df\"" Apr 24 23:36:48.842173 containerd[1515]: time="2026-04-24T23:36:48.842152999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:36:48.850059 containerd[1515]: time="2026-04-24T23:36:48.849882967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:48.850059 containerd[1515]: time="2026-04-24T23:36:48.849920387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:48.850059 containerd[1515]: time="2026-04-24T23:36:48.849930153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.850059 containerd[1515]: time="2026-04-24T23:36:48.850009581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.865943 systemd[1]: Started cri-containerd-405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe.scope - libcontainer container 405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe. 
Apr 24 23:36:48.885872 containerd[1515]: time="2026-04-24T23:36:48.885687736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j25x6,Uid:3a2c1f89-b23d-43d9-848c-14d1f32a38e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\"" Apr 24 23:36:49.976248 kubelet[2559]: E0424 23:36:49.976156 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:36:50.631457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796837352.mount: Deactivated successfully. Apr 24 23:36:50.982873 update_engine[1494]: I20260424 23:36:50.982815 1494 update_attempter.cc:509] Updating boot flags... Apr 24 23:36:51.026956 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3090) Apr 24 23:36:51.094392 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3086) Apr 24 23:36:51.152834 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3086) Apr 24 23:36:51.583934 containerd[1515]: time="2026-04-24T23:36:51.583823744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:51.584838 containerd[1515]: time="2026-04-24T23:36:51.584791457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 24 23:36:51.585669 containerd[1515]: time="2026-04-24T23:36:51.585645258Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Apr 24 23:36:51.587411 containerd[1515]: time="2026-04-24T23:36:51.587392023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:51.588129 containerd[1515]: time="2026-04-24T23:36:51.588106853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.745845389s" Apr 24 23:36:51.588166 containerd[1515]: time="2026-04-24T23:36:51.588130881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 24 23:36:51.589381 containerd[1515]: time="2026-04-24T23:36:51.589356705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:36:51.601543 containerd[1515]: time="2026-04-24T23:36:51.601500465Z" level=info msg="CreateContainer within sandbox \"756d86928480d0cb1cf6445fcf50e4408c4affe75dfab08e14e62a1b668595df\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:36:51.611624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount274084956.mount: Deactivated successfully. 
Apr 24 23:36:51.612739 containerd[1515]: time="2026-04-24T23:36:51.612713362Z" level=info msg="CreateContainer within sandbox \"756d86928480d0cb1cf6445fcf50e4408c4affe75dfab08e14e62a1b668595df\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"79fa509cefb91cbff10afa99a3fb7f1122b4e2d7545ba3eae4afdae39fb627b2\"" Apr 24 23:36:51.613888 containerd[1515]: time="2026-04-24T23:36:51.613149868Z" level=info msg="StartContainer for \"79fa509cefb91cbff10afa99a3fb7f1122b4e2d7545ba3eae4afdae39fb627b2\"" Apr 24 23:36:51.637934 systemd[1]: Started cri-containerd-79fa509cefb91cbff10afa99a3fb7f1122b4e2d7545ba3eae4afdae39fb627b2.scope - libcontainer container 79fa509cefb91cbff10afa99a3fb7f1122b4e2d7545ba3eae4afdae39fb627b2. Apr 24 23:36:51.673454 containerd[1515]: time="2026-04-24T23:36:51.673402955Z" level=info msg="StartContainer for \"79fa509cefb91cbff10afa99a3fb7f1122b4e2d7545ba3eae4afdae39fb627b2\" returns successfully" Apr 24 23:36:51.976875 kubelet[2559]: E0424 23:36:51.976036 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:36:52.065015 kubelet[2559]: I0424 23:36:52.063927 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c4d48c589-s4z5x" podStartSLOduration=1.315773198 podStartE2EDuration="4.063905937s" podCreationTimestamp="2026-04-24 23:36:48 +0000 UTC" firstStartedPulling="2026-04-24 23:36:48.840598519 +0000 UTC m=+15.946500608" lastFinishedPulling="2026-04-24 23:36:51.588731268 +0000 UTC m=+18.694633347" observedRunningTime="2026-04-24 23:36:52.063350594 +0000 UTC m=+19.169252723" watchObservedRunningTime="2026-04-24 23:36:52.063905937 +0000 UTC m=+19.169808056" Apr 24 23:36:52.083676 kubelet[2559]: E0424 
23:36:52.083438 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:52.083676 kubelet[2559]: W0424 23:36:52.083462 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:52.083676 kubelet[2559]: E0424 23:36:52.083485 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Apr 24 23:36:53.048689 kubelet[2559]: I0424 23:36:53.048640 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" 
Apr 24 23:36:53.103522 kubelet[2559]: E0424 23:36:53.103506 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.103522 kubelet[2559]: W0424 23:36:53.103517 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.103582 kubelet[2559]: E0424 23:36:53.103526 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.103788 kubelet[2559]: E0424 23:36:53.103772 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.103788 kubelet[2559]: W0424 23:36:53.103784 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.103882 kubelet[2559]: E0424 23:36:53.103792 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.116161 kubelet[2559]: E0424 23:36:53.116139 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.116161 kubelet[2559]: W0424 23:36:53.116153 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.116256 kubelet[2559]: E0424 23:36:53.116165 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.116406 kubelet[2559]: E0424 23:36:53.116388 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.116406 kubelet[2559]: W0424 23:36:53.116402 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.116470 kubelet[2559]: E0424 23:36:53.116412 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.116773 kubelet[2559]: E0424 23:36:53.116760 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.116773 kubelet[2559]: W0424 23:36:53.116770 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.116918 kubelet[2559]: E0424 23:36:53.116779 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.117104 kubelet[2559]: E0424 23:36:53.117081 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.117104 kubelet[2559]: W0424 23:36:53.117100 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.117179 kubelet[2559]: E0424 23:36:53.117114 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.117439 kubelet[2559]: E0424 23:36:53.117419 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.117439 kubelet[2559]: W0424 23:36:53.117435 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.117505 kubelet[2559]: E0424 23:36:53.117447 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.117771 kubelet[2559]: E0424 23:36:53.117750 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.117771 kubelet[2559]: W0424 23:36:53.117766 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.117852 kubelet[2559]: E0424 23:36:53.117781 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.118148 kubelet[2559]: E0424 23:36:53.118128 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.118148 kubelet[2559]: W0424 23:36:53.118145 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.118218 kubelet[2559]: E0424 23:36:53.118157 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.118511 kubelet[2559]: E0424 23:36:53.118491 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.118511 kubelet[2559]: W0424 23:36:53.118507 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.118579 kubelet[2559]: E0424 23:36:53.118518 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.119252 kubelet[2559]: E0424 23:36:53.119228 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.119252 kubelet[2559]: W0424 23:36:53.119249 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.119321 kubelet[2559]: E0424 23:36:53.119262 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.119949 kubelet[2559]: E0424 23:36:53.119923 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.119949 kubelet[2559]: W0424 23:36:53.119945 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.120026 kubelet[2559]: E0424 23:36:53.119958 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.120517 kubelet[2559]: E0424 23:36:53.120404 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.120517 kubelet[2559]: W0424 23:36:53.120416 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.120517 kubelet[2559]: E0424 23:36:53.120426 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.120838 kubelet[2559]: E0424 23:36:53.120826 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.121004 kubelet[2559]: W0424 23:36:53.120903 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.121004 kubelet[2559]: E0424 23:36:53.120915 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.121232 kubelet[2559]: E0424 23:36:53.121205 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.121232 kubelet[2559]: W0424 23:36:53.121218 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.121232 kubelet[2559]: E0424 23:36:53.121226 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.122382 kubelet[2559]: E0424 23:36:53.121450 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.122382 kubelet[2559]: W0424 23:36:53.121457 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.122382 kubelet[2559]: E0424 23:36:53.121467 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.122382 kubelet[2559]: E0424 23:36:53.121975 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.122382 kubelet[2559]: W0424 23:36:53.121983 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.122382 kubelet[2559]: E0424 23:36:53.122009 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.122382 kubelet[2559]: E0424 23:36:53.122219 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.122382 kubelet[2559]: W0424 23:36:53.122226 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.122382 kubelet[2559]: E0424 23:36:53.122234 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.123224 kubelet[2559]: E0424 23:36:53.123189 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.123486 kubelet[2559]: W0424 23:36:53.123288 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.123486 kubelet[2559]: E0424 23:36:53.123300 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:53.123791 kubelet[2559]: E0424 23:36:53.123780 2559 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:53.123904 kubelet[2559]: W0424 23:36:53.123872 2559 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:53.123952 kubelet[2559]: E0424 23:36:53.123884 2559 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:53.211330 containerd[1515]: time="2026-04-24T23:36:53.211246119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:53.212743 containerd[1515]: time="2026-04-24T23:36:53.212686298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 24 23:36:53.213849 containerd[1515]: time="2026-04-24T23:36:53.213795895Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:53.215644 containerd[1515]: time="2026-04-24T23:36:53.215609947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:53.216508 containerd[1515]: time="2026-04-24T23:36:53.216192689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.626810693s" Apr 24 23:36:53.216508 containerd[1515]: time="2026-04-24T23:36:53.216232562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 24 23:36:53.220030 containerd[1515]: time="2026-04-24T23:36:53.219995889Z" level=info msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:36:53.240619 containerd[1515]: time="2026-04-24T23:36:53.240579711Z" level=info msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee\"" Apr 24 23:36:53.242160 containerd[1515]: time="2026-04-24T23:36:53.240987315Z" level=info msg="StartContainer for \"94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee\"" Apr 24 23:36:53.263523 systemd[1]: run-containerd-runc-k8s.io-94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee-runc.toPsE6.mount: Deactivated successfully. Apr 24 23:36:53.270908 systemd[1]: Started cri-containerd-94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee.scope - libcontainer container 94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee. 
Apr 24 23:36:53.297062 containerd[1515]: time="2026-04-24T23:36:53.297020119Z" level=info msg="StartContainer for \"94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee\" returns successfully" Apr 24 23:36:53.308512 systemd[1]: cri-containerd-94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee.scope: Deactivated successfully. Apr 24 23:36:53.387881 containerd[1515]: time="2026-04-24T23:36:53.387758526Z" level=info msg="shim disconnected" id=94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee namespace=k8s.io Apr 24 23:36:53.387881 containerd[1515]: time="2026-04-24T23:36:53.387849660Z" level=warning msg="cleaning up after shim disconnected" id=94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee namespace=k8s.io Apr 24 23:36:53.387881 containerd[1515]: time="2026-04-24T23:36:53.387860707Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:36:53.595326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94e6a1d0da84790d1dd10135dba505f1e7c883a34c457cd298ebad5c33ba22ee-rootfs.mount: Deactivated successfully. 
Apr 24 23:36:53.975787 kubelet[2559]: E0424 23:36:53.975706 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:36:54.055138 containerd[1515]: time="2026-04-24T23:36:54.054722817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:36:55.975383 kubelet[2559]: E0424 23:36:55.975349 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:36:57.975776 kubelet[2559]: E0424 23:36:57.975467 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:36:58.211462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1147767796.mount: Deactivated successfully. 
Apr 24 23:36:58.238493 containerd[1515]: time="2026-04-24T23:36:58.238385739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:58.239507 containerd[1515]: time="2026-04-24T23:36:58.239437060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 24 23:36:58.240486 containerd[1515]: time="2026-04-24T23:36:58.240061013Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:58.241481 containerd[1515]: time="2026-04-24T23:36:58.241450390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:58.242214 containerd[1515]: time="2026-04-24T23:36:58.241884227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.18712225s" Apr 24 23:36:58.242214 containerd[1515]: time="2026-04-24T23:36:58.241908636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 24 23:36:58.247901 containerd[1515]: time="2026-04-24T23:36:58.247872099Z" level=info msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:36:58.263991 containerd[1515]: time="2026-04-24T23:36:58.263964285Z" level=info 
msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a\"" Apr 24 23:36:58.264603 containerd[1515]: time="2026-04-24T23:36:58.264532041Z" level=info msg="StartContainer for \"32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a\"" Apr 24 23:36:58.290934 systemd[1]: Started cri-containerd-32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a.scope - libcontainer container 32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a. Apr 24 23:36:58.317443 containerd[1515]: time="2026-04-24T23:36:58.317402036Z" level=info msg="StartContainer for \"32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a\" returns successfully" Apr 24 23:36:58.354949 systemd[1]: cri-containerd-32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a.scope: Deactivated successfully. Apr 24 23:36:58.443911 containerd[1515]: time="2026-04-24T23:36:58.443844242Z" level=info msg="shim disconnected" id=32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a namespace=k8s.io Apr 24 23:36:58.443911 containerd[1515]: time="2026-04-24T23:36:58.443896664Z" level=warning msg="cleaning up after shim disconnected" id=32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a namespace=k8s.io Apr 24 23:36:58.443911 containerd[1515]: time="2026-04-24T23:36:58.443906589Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:36:59.072666 containerd[1515]: time="2026-04-24T23:36:59.072310786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:36:59.212649 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32ec6f855307155388c266075396dec0f88f6441fca4a5b0d57953f691095d5a-rootfs.mount: Deactivated successfully. 
Apr 24 23:36:59.975761 kubelet[2559]: E0424 23:36:59.975687 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:37:01.724229 containerd[1515]: time="2026-04-24T23:37:01.724180332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:01.725266 containerd[1515]: time="2026-04-24T23:37:01.725216288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 24 23:37:01.726233 containerd[1515]: time="2026-04-24T23:37:01.726199672Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:01.729818 containerd[1515]: time="2026-04-24T23:37:01.728466335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:01.729818 containerd[1515]: time="2026-04-24T23:37:01.728816477Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.656232627s" Apr 24 23:37:01.729818 containerd[1515]: time="2026-04-24T23:37:01.728839713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 24 23:37:01.734173 containerd[1515]: time="2026-04-24T23:37:01.734140567Z" level=info msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:37:01.760281 containerd[1515]: time="2026-04-24T23:37:01.760232510Z" level=info msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265\"" Apr 24 23:37:01.761239 containerd[1515]: time="2026-04-24T23:37:01.760617286Z" level=info msg="StartContainer for \"b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265\"" Apr 24 23:37:01.787924 systemd[1]: Started cri-containerd-b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265.scope - libcontainer container b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265. 
Apr 24 23:37:01.813328 containerd[1515]: time="2026-04-24T23:37:01.813224427Z" level=info msg="StartContainer for \"b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265\" returns successfully" Apr 24 23:37:01.975825 kubelet[2559]: E0424 23:37:01.975689 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g5mf4" podUID="8788446e-4c60-4f01-947d-08e34daf3b75" Apr 24 23:37:02.311905 containerd[1515]: time="2026-04-24T23:37:02.311695483Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 23:37:02.314185 systemd[1]: cri-containerd-b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265.scope: Deactivated successfully. Apr 24 23:37:02.333505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265-rootfs.mount: Deactivated successfully. Apr 24 23:37:02.383173 kubelet[2559]: I0424 23:37:02.382676 2559 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:37:02.437546 systemd[1]: Created slice kubepods-burstable-pod10ad3905_de1e_4e02_b1ab_22144ab5e695.slice - libcontainer container kubepods-burstable-pod10ad3905_de1e_4e02_b1ab_22144ab5e695.slice. Apr 24 23:37:02.457609 systemd[1]: Created slice kubepods-burstable-podf7be2f51_9471_4c71_a057_7cb4caf6a49f.slice - libcontainer container kubepods-burstable-podf7be2f51_9471_4c71_a057_7cb4caf6a49f.slice. 
Apr 24 23:37:02.483604 kubelet[2559]: I0424 23:37:02.483564 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7be2f51-9471-4c71-a057-7cb4caf6a49f-config-volume\") pod \"coredns-674b8bbfcf-4tlwp\" (UID: \"f7be2f51-9471-4c71-a057-7cb4caf6a49f\") " pod="kube-system/coredns-674b8bbfcf-4tlwp" Apr 24 23:37:02.483604 kubelet[2559]: I0424 23:37:02.483598 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab76dadb-e21e-4b46-b8de-e6bfe6616944-tigera-ca-bundle\") pod \"calico-kube-controllers-7978df8fd-fbqhx\" (UID: \"ab76dadb-e21e-4b46-b8de-e6bfe6616944\") " pod="calico-system/calico-kube-controllers-7978df8fd-fbqhx" Apr 24 23:37:02.483604 kubelet[2559]: I0424 23:37:02.483612 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r64z2\" (UniqueName: \"kubernetes.io/projected/f7be2f51-9471-4c71-a057-7cb4caf6a49f-kube-api-access-r64z2\") pod \"coredns-674b8bbfcf-4tlwp\" (UID: \"f7be2f51-9471-4c71-a057-7cb4caf6a49f\") " pod="kube-system/coredns-674b8bbfcf-4tlwp" Apr 24 23:37:02.483852 kubelet[2559]: I0424 23:37:02.483625 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ft96\" (UniqueName: \"kubernetes.io/projected/10ad3905-de1e-4e02-b1ab-22144ab5e695-kube-api-access-4ft96\") pod \"coredns-674b8bbfcf-kfr8z\" (UID: \"10ad3905-de1e-4e02-b1ab-22144ab5e695\") " pod="kube-system/coredns-674b8bbfcf-kfr8z" Apr 24 23:37:02.483852 kubelet[2559]: I0424 23:37:02.483637 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10ad3905-de1e-4e02-b1ab-22144ab5e695-config-volume\") pod \"coredns-674b8bbfcf-kfr8z\" (UID: 
\"10ad3905-de1e-4e02-b1ab-22144ab5e695\") " pod="kube-system/coredns-674b8bbfcf-kfr8z" Apr 24 23:37:02.483852 kubelet[2559]: I0424 23:37:02.483650 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp55s\" (UniqueName: \"kubernetes.io/projected/ab76dadb-e21e-4b46-b8de-e6bfe6616944-kube-api-access-zp55s\") pod \"calico-kube-controllers-7978df8fd-fbqhx\" (UID: \"ab76dadb-e21e-4b46-b8de-e6bfe6616944\") " pod="calico-system/calico-kube-controllers-7978df8fd-fbqhx" Apr 24 23:37:02.511181 systemd[1]: Created slice kubepods-besteffort-pod09d823c4_6db7_41e1_b24a_cd89a1522a77.slice - libcontainer container kubepods-besteffort-pod09d823c4_6db7_41e1_b24a_cd89a1522a77.slice. Apr 24 23:37:02.511785 containerd[1515]: time="2026-04-24T23:37:02.511361054Z" level=info msg="shim disconnected" id=b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265 namespace=k8s.io Apr 24 23:37:02.511785 containerd[1515]: time="2026-04-24T23:37:02.511568254Z" level=warning msg="cleaning up after shim disconnected" id=b3789646f8c8f7b6d0e7051f7e59ca85c562fa2a40f5fc732ce104e9a2108265 namespace=k8s.io Apr 24 23:37:02.511785 containerd[1515]: time="2026-04-24T23:37:02.511576237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:37:02.522601 systemd[1]: Created slice kubepods-besteffort-pod8a74c825_4dc2_43a3_8973_667ef8645b56.slice - libcontainer container kubepods-besteffort-pod8a74c825_4dc2_43a3_8973_667ef8645b56.slice. Apr 24 23:37:02.535311 systemd[1]: Created slice kubepods-besteffort-pod13f36578_f50e_4376_ab00_85fdce7a7d13.slice - libcontainer container kubepods-besteffort-pod13f36578_f50e_4376_ab00_85fdce7a7d13.slice. Apr 24 23:37:02.543227 systemd[1]: Created slice kubepods-besteffort-podcb3eda9a_d9f6_480e_8b39_2bd662594f1a.slice - libcontainer container kubepods-besteffort-podcb3eda9a_d9f6_480e_8b39_2bd662594f1a.slice. 
Apr 24 23:37:02.550925 systemd[1]: Created slice kubepods-besteffort-podab76dadb_e21e_4b46_b8de_e6bfe6616944.slice - libcontainer container kubepods-besteffort-podab76dadb_e21e_4b46_b8de_e6bfe6616944.slice. Apr 24 23:37:02.584180 kubelet[2559]: I0424 23:37:02.584078 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgdrc\" (UniqueName: \"kubernetes.io/projected/09d823c4-6db7-41e1-b24a-cd89a1522a77-kube-api-access-kgdrc\") pod \"calico-apiserver-956fb69fd-pkvxp\" (UID: \"09d823c4-6db7-41e1-b24a-cd89a1522a77\") " pod="calico-system/calico-apiserver-956fb69fd-pkvxp" Apr 24 23:37:02.584908 kubelet[2559]: I0424 23:37:02.584888 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-nginx-config\") pod \"whisker-645dbcb77-rtzrr\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " pod="calico-system/whisker-645dbcb77-rtzrr" Apr 24 23:37:02.584969 kubelet[2559]: I0424 23:37:02.584914 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3eda9a-d9f6-480e-8b39-2bd662594f1a-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-v245s\" (UID: \"cb3eda9a-d9f6-480e-8b39-2bd662594f1a\") " pod="calico-system/goldmane-5b85766d88-v245s" Apr 24 23:37:02.584969 kubelet[2559]: I0424 23:37:02.584926 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxtgb\" (UniqueName: \"kubernetes.io/projected/cb3eda9a-d9f6-480e-8b39-2bd662594f1a-kube-api-access-dxtgb\") pod \"goldmane-5b85766d88-v245s\" (UID: \"cb3eda9a-d9f6-480e-8b39-2bd662594f1a\") " pod="calico-system/goldmane-5b85766d88-v245s" Apr 24 23:37:02.584969 kubelet[2559]: I0424 23:37:02.584937 2559 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflcp\" (UniqueName: \"kubernetes.io/projected/8a74c825-4dc2-43a3-8973-667ef8645b56-kube-api-access-bflcp\") pod \"whisker-645dbcb77-rtzrr\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " pod="calico-system/whisker-645dbcb77-rtzrr" Apr 24 23:37:02.584969 kubelet[2559]: I0424 23:37:02.584962 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/13f36578-f50e-4376-ab00-85fdce7a7d13-calico-apiserver-certs\") pod \"calico-apiserver-956fb69fd-vsnpb\" (UID: \"13f36578-f50e-4376-ab00-85fdce7a7d13\") " pod="calico-system/calico-apiserver-956fb69fd-vsnpb" Apr 24 23:37:02.585047 kubelet[2559]: I0424 23:37:02.584975 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3eda9a-d9f6-480e-8b39-2bd662594f1a-config\") pod \"goldmane-5b85766d88-v245s\" (UID: \"cb3eda9a-d9f6-480e-8b39-2bd662594f1a\") " pod="calico-system/goldmane-5b85766d88-v245s" Apr 24 23:37:02.585047 kubelet[2559]: I0424 23:37:02.584986 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cb3eda9a-d9f6-480e-8b39-2bd662594f1a-goldmane-key-pair\") pod \"goldmane-5b85766d88-v245s\" (UID: \"cb3eda9a-d9f6-480e-8b39-2bd662594f1a\") " pod="calico-system/goldmane-5b85766d88-v245s" Apr 24 23:37:02.585047 kubelet[2559]: I0424 23:37:02.584998 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-ca-bundle\") pod \"whisker-645dbcb77-rtzrr\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " pod="calico-system/whisker-645dbcb77-rtzrr" Apr 24 23:37:02.585047 kubelet[2559]: I0424 
23:37:02.585009 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09d823c4-6db7-41e1-b24a-cd89a1522a77-calico-apiserver-certs\") pod \"calico-apiserver-956fb69fd-pkvxp\" (UID: \"09d823c4-6db7-41e1-b24a-cd89a1522a77\") " pod="calico-system/calico-apiserver-956fb69fd-pkvxp" Apr 24 23:37:02.585047 kubelet[2559]: I0424 23:37:02.585019 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-backend-key-pair\") pod \"whisker-645dbcb77-rtzrr\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " pod="calico-system/whisker-645dbcb77-rtzrr" Apr 24 23:37:02.585131 kubelet[2559]: I0424 23:37:02.585041 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bcx6\" (UniqueName: \"kubernetes.io/projected/13f36578-f50e-4376-ab00-85fdce7a7d13-kube-api-access-6bcx6\") pod \"calico-apiserver-956fb69fd-vsnpb\" (UID: \"13f36578-f50e-4376-ab00-85fdce7a7d13\") " pod="calico-system/calico-apiserver-956fb69fd-vsnpb" Apr 24 23:37:02.742837 containerd[1515]: time="2026-04-24T23:37:02.742786865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfr8z,Uid:10ad3905-de1e-4e02-b1ab-22144ab5e695,Namespace:kube-system,Attempt:0,}" Apr 24 23:37:02.763265 containerd[1515]: time="2026-04-24T23:37:02.763189849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4tlwp,Uid:f7be2f51-9471-4c71-a057-7cb4caf6a49f,Namespace:kube-system,Attempt:0,}" Apr 24 23:37:02.820817 containerd[1515]: time="2026-04-24T23:37:02.820653088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-956fb69fd-pkvxp,Uid:09d823c4-6db7-41e1-b24a-cd89a1522a77,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:02.833363 
containerd[1515]: time="2026-04-24T23:37:02.833127952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645dbcb77-rtzrr,Uid:8a74c825-4dc2-43a3-8973-667ef8645b56,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:02.841891 containerd[1515]: time="2026-04-24T23:37:02.841790889Z" level=error msg="Failed to destroy network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.842447 containerd[1515]: time="2026-04-24T23:37:02.842273245Z" level=error msg="encountered an error cleaning up failed sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.842447 containerd[1515]: time="2026-04-24T23:37:02.842315019Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4tlwp,Uid:f7be2f51-9471-4c71-a057-7cb4caf6a49f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.842518 kubelet[2559]: E0424 23:37:02.842496 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 24 23:37:02.842568 kubelet[2559]: E0424 23:37:02.842556 2559 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4tlwp" Apr 24 23:37:02.842609 kubelet[2559]: E0424 23:37:02.842576 2559 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4tlwp" Apr 24 23:37:02.842718 kubelet[2559]: E0424 23:37:02.842636 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4tlwp_kube-system(f7be2f51-9471-4c71-a057-7cb4caf6a49f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4tlwp_kube-system(f7be2f51-9471-4c71-a057-7cb4caf6a49f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4tlwp" podUID="f7be2f51-9471-4c71-a057-7cb4caf6a49f" Apr 24 23:37:02.847012 containerd[1515]: time="2026-04-24T23:37:02.846985463Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-956fb69fd-vsnpb,Uid:13f36578-f50e-4376-ab00-85fdce7a7d13,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:02.848604 containerd[1515]: time="2026-04-24T23:37:02.848494206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-v245s,Uid:cb3eda9a-d9f6-480e-8b39-2bd662594f1a,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:02.849710 containerd[1515]: time="2026-04-24T23:37:02.849686690Z" level=error msg="Failed to destroy network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.850006 containerd[1515]: time="2026-04-24T23:37:02.849984170Z" level=error msg="encountered an error cleaning up failed sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.850043 containerd[1515]: time="2026-04-24T23:37:02.850018083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfr8z,Uid:10ad3905-de1e-4e02-b1ab-22144ab5e695,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.850187 kubelet[2559]: E0424 23:37:02.850154 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.850218 kubelet[2559]: E0424 23:37:02.850186 2559 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kfr8z" Apr 24 23:37:02.850218 kubelet[2559]: E0424 23:37:02.850201 2559 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kfr8z" Apr 24 23:37:02.850250 kubelet[2559]: E0424 23:37:02.850232 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kfr8z_kube-system(10ad3905-de1e-4e02-b1ab-22144ab5e695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kfr8z_kube-system(10ad3905-de1e-4e02-b1ab-22144ab5e695)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kfr8z" 
podUID="10ad3905-de1e-4e02-b1ab-22144ab5e695" Apr 24 23:37:02.853812 containerd[1515]: time="2026-04-24T23:37:02.853699616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7978df8fd-fbqhx,Uid:ab76dadb-e21e-4b46-b8de-e6bfe6616944,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:02.925980 containerd[1515]: time="2026-04-24T23:37:02.925505156Z" level=error msg="Failed to destroy network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.925980 containerd[1515]: time="2026-04-24T23:37:02.925840215Z" level=error msg="encountered an error cleaning up failed sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.925980 containerd[1515]: time="2026-04-24T23:37:02.925876410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-956fb69fd-pkvxp,Uid:09d823c4-6db7-41e1-b24a-cd89a1522a77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.926159 kubelet[2559]: E0424 23:37:02.926057 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.926159 kubelet[2559]: E0424 23:37:02.926104 2559 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-956fb69fd-pkvxp" Apr 24 23:37:02.926159 kubelet[2559]: E0424 23:37:02.926123 2559 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-956fb69fd-pkvxp" Apr 24 23:37:02.926231 kubelet[2559]: E0424 23:37:02.926162 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-956fb69fd-pkvxp_calico-system(09d823c4-6db7-41e1-b24a-cd89a1522a77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-956fb69fd-pkvxp_calico-system(09d823c4-6db7-41e1-b24a-cd89a1522a77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-956fb69fd-pkvxp" podUID="09d823c4-6db7-41e1-b24a-cd89a1522a77" Apr 24 23:37:02.934507 
containerd[1515]: time="2026-04-24T23:37:02.934475831Z" level=error msg="Failed to destroy network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.934893 containerd[1515]: time="2026-04-24T23:37:02.934874807Z" level=error msg="encountered an error cleaning up failed sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.934985 containerd[1515]: time="2026-04-24T23:37:02.934966930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-645dbcb77-rtzrr,Uid:8a74c825-4dc2-43a3-8973-667ef8645b56,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.935206 kubelet[2559]: E0424 23:37:02.935166 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.935251 kubelet[2559]: E0424 23:37:02.935215 2559 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-645dbcb77-rtzrr" Apr 24 23:37:02.935251 kubelet[2559]: E0424 23:37:02.935230 2559 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-645dbcb77-rtzrr" Apr 24 23:37:02.935288 kubelet[2559]: E0424 23:37:02.935266 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-645dbcb77-rtzrr_calico-system(8a74c825-4dc2-43a3-8973-667ef8645b56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-645dbcb77-rtzrr_calico-system(8a74c825-4dc2-43a3-8973-667ef8645b56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-645dbcb77-rtzrr" podUID="8a74c825-4dc2-43a3-8973-667ef8645b56" Apr 24 23:37:02.953671 containerd[1515]: time="2026-04-24T23:37:02.953617794Z" level=error msg="Failed to destroy network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.954159 
containerd[1515]: time="2026-04-24T23:37:02.954133642Z" level=error msg="encountered an error cleaning up failed sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.954208 containerd[1515]: time="2026-04-24T23:37:02.954179212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-v245s,Uid:cb3eda9a-d9f6-480e-8b39-2bd662594f1a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.954392 kubelet[2559]: E0424 23:37:02.954358 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.954546 kubelet[2559]: E0424 23:37:02.954411 2559 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-v245s" Apr 24 23:37:02.954546 kubelet[2559]: E0424 23:37:02.954431 2559 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-v245s" Apr 24 23:37:02.954546 kubelet[2559]: E0424 23:37:02.954475 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-v245s_calico-system(cb3eda9a-d9f6-480e-8b39-2bd662594f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-v245s_calico-system(cb3eda9a-d9f6-480e-8b39-2bd662594f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-v245s" podUID="cb3eda9a-d9f6-480e-8b39-2bd662594f1a" Apr 24 23:37:02.973917 containerd[1515]: time="2026-04-24T23:37:02.973869114Z" level=error msg="Failed to destroy network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.974473 containerd[1515]: time="2026-04-24T23:37:02.974415028Z" level=error msg="encountered an error cleaning up failed sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Apr 24 23:37:02.974553 containerd[1515]: time="2026-04-24T23:37:02.974456923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-956fb69fd-vsnpb,Uid:13f36578-f50e-4376-ab00-85fdce7a7d13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.974834 kubelet[2559]: E0424 23:37:02.974773 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.974880 kubelet[2559]: E0424 23:37:02.974852 2559 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-956fb69fd-vsnpb" Apr 24 23:37:02.974880 kubelet[2559]: E0424 23:37:02.974872 2559 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-956fb69fd-vsnpb" Apr 24 23:37:02.974965 kubelet[2559]: E0424 23:37:02.974934 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-956fb69fd-vsnpb_calico-system(13f36578-f50e-4376-ab00-85fdce7a7d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-956fb69fd-vsnpb_calico-system(13f36578-f50e-4376-ab00-85fdce7a7d13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-956fb69fd-vsnpb" podUID="13f36578-f50e-4376-ab00-85fdce7a7d13" Apr 24 23:37:02.976661 containerd[1515]: time="2026-04-24T23:37:02.976636134Z" level=error msg="Failed to destroy network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.977689 containerd[1515]: time="2026-04-24T23:37:02.977588026Z" level=error msg="encountered an error cleaning up failed sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.977689 containerd[1515]: time="2026-04-24T23:37:02.977623770Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7978df8fd-fbqhx,Uid:ab76dadb-e21e-4b46-b8de-e6bfe6616944,Namespace:calico-system,Attempt:0,} failed, error" error="failed 
to setup network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.977798 kubelet[2559]: E0424 23:37:02.977700 2559 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:02.977798 kubelet[2559]: E0424 23:37:02.977736 2559 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7978df8fd-fbqhx" Apr 24 23:37:02.977798 kubelet[2559]: E0424 23:37:02.977753 2559 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7978df8fd-fbqhx" Apr 24 23:37:02.978202 kubelet[2559]: E0424 23:37:02.977781 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7978df8fd-fbqhx_calico-system(ab76dadb-e21e-4b46-b8de-e6bfe6616944)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7978df8fd-fbqhx_calico-system(ab76dadb-e21e-4b46-b8de-e6bfe6616944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7978df8fd-fbqhx" podUID="ab76dadb-e21e-4b46-b8de-e6bfe6616944" Apr 24 23:37:03.078034 kubelet[2559]: I0424 23:37:03.077983 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:03.079781 containerd[1515]: time="2026-04-24T23:37:03.079484086Z" level=info msg="StopPodSandbox for \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\"" Apr 24 23:37:03.080555 containerd[1515]: time="2026-04-24T23:37:03.080247194Z" level=info msg="Ensure that sandbox fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3 in task-service has been cleanup successfully" Apr 24 23:37:03.082577 kubelet[2559]: I0424 23:37:03.082379 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:03.087074 kubelet[2559]: I0424 23:37:03.087043 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:03.087251 containerd[1515]: time="2026-04-24T23:37:03.087080312Z" level=info msg="StopPodSandbox for \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\"" Apr 24 23:37:03.087347 containerd[1515]: time="2026-04-24T23:37:03.087316826Z" level=info msg="Ensure that sandbox 
9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788 in task-service has been cleanup successfully" Apr 24 23:37:03.089097 containerd[1515]: time="2026-04-24T23:37:03.088927673Z" level=info msg="StopPodSandbox for \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\"" Apr 24 23:37:03.089870 containerd[1515]: time="2026-04-24T23:37:03.089615615Z" level=info msg="Ensure that sandbox 152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6 in task-service has been cleanup successfully" Apr 24 23:37:03.097884 kubelet[2559]: I0424 23:37:03.096021 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:03.102319 containerd[1515]: time="2026-04-24T23:37:03.101614227Z" level=info msg="StopPodSandbox for \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\"" Apr 24 23:37:03.102319 containerd[1515]: time="2026-04-24T23:37:03.101789978Z" level=info msg="Ensure that sandbox 8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998 in task-service has been cleanup successfully" Apr 24 23:37:03.127336 kubelet[2559]: I0424 23:37:03.126966 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:03.128797 containerd[1515]: time="2026-04-24T23:37:03.128237243Z" level=info msg="StopPodSandbox for \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\"" Apr 24 23:37:03.128797 containerd[1515]: time="2026-04-24T23:37:03.128428560Z" level=info msg="Ensure that sandbox d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea in task-service has been cleanup successfully" Apr 24 23:37:03.136175 containerd[1515]: time="2026-04-24T23:37:03.136098719Z" level=info msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:37:03.136373 kubelet[2559]: I0424 23:37:03.136297 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:03.138745 containerd[1515]: time="2026-04-24T23:37:03.138441315Z" level=info msg="StopPodSandbox for \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\"" Apr 24 23:37:03.138745 containerd[1515]: time="2026-04-24T23:37:03.138556042Z" level=info msg="Ensure that sandbox 37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f in task-service has been cleanup successfully" Apr 24 23:37:03.154617 kubelet[2559]: I0424 23:37:03.154590 2559 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:03.157960 containerd[1515]: time="2026-04-24T23:37:03.157922057Z" level=info msg="StopPodSandbox for \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\"" Apr 24 23:37:03.158081 containerd[1515]: time="2026-04-24T23:37:03.158066390Z" level=info msg="Ensure that sandbox 0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d in task-service has been cleanup successfully" Apr 24 23:37:03.169933 containerd[1515]: time="2026-04-24T23:37:03.169900959Z" level=info msg="CreateContainer within sandbox \"405e586c3a6cad839122bfc58b0e2b3170617523bd6cea1f70a7faba9f98bcfe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9fb8f33907e959fd3c94efafae4b3e15140df9813f7e01da6f90bf6331cd692c\"" Apr 24 23:37:03.170897 containerd[1515]: time="2026-04-24T23:37:03.170873490Z" level=info msg="StartContainer for \"9fb8f33907e959fd3c94efafae4b3e15140df9813f7e01da6f90bf6331cd692c\"" Apr 24 23:37:03.185272 containerd[1515]: time="2026-04-24T23:37:03.184361561Z" level=error msg="StopPodSandbox for 
\"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\" failed" error="failed to destroy network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:03.185370 kubelet[2559]: E0424 23:37:03.184635 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:03.185370 kubelet[2559]: E0424 23:37:03.184678 2559 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788"} Apr 24 23:37:03.185370 kubelet[2559]: E0424 23:37:03.184721 2559 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"13f36578-f50e-4376-ab00-85fdce7a7d13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:37:03.185370 kubelet[2559]: E0424 23:37:03.184741 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"13f36578-f50e-4376-ab00-85fdce7a7d13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-956fb69fd-vsnpb" podUID="13f36578-f50e-4376-ab00-85fdce7a7d13" Apr 24 23:37:03.192018 containerd[1515]: time="2026-04-24T23:37:03.191728132Z" level=error msg="StopPodSandbox for \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\" failed" error="failed to destroy network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:03.193345 kubelet[2559]: E0424 23:37:03.192193 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:03.193345 kubelet[2559]: E0424 23:37:03.192222 2559 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3"} Apr 24 23:37:03.193345 kubelet[2559]: E0424 23:37:03.192243 2559 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb3eda9a-d9f6-480e-8b39-2bd662594f1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:37:03.193345 kubelet[2559]: E0424 23:37:03.192259 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb3eda9a-d9f6-480e-8b39-2bd662594f1a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-v245s" podUID="cb3eda9a-d9f6-480e-8b39-2bd662594f1a" Apr 24 23:37:03.217730 containerd[1515]: time="2026-04-24T23:37:03.217683839Z" level=error msg="StopPodSandbox for \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\" failed" error="failed to destroy network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:03.218116 kubelet[2559]: E0424 23:37:03.218063 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:03.218166 kubelet[2559]: E0424 23:37:03.218116 2559 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998"} Apr 24 23:37:03.218166 kubelet[2559]: E0424 23:37:03.218143 2559 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10ad3905-de1e-4e02-b1ab-22144ab5e695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:37:03.218262 kubelet[2559]: E0424 23:37:03.218163 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10ad3905-de1e-4e02-b1ab-22144ab5e695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kfr8z" podUID="10ad3905-de1e-4e02-b1ab-22144ab5e695" Apr 24 23:37:03.222109 containerd[1515]: time="2026-04-24T23:37:03.222085344Z" level=error msg="StopPodSandbox for \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\" failed" error="failed to destroy network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:03.222356 kubelet[2559]: E0424 23:37:03.222326 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:03.222415 kubelet[2559]: E0424 23:37:03.222367 2559 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea"} Apr 24 23:37:03.222415 kubelet[2559]: E0424 23:37:03.222389 2559 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8a74c825-4dc2-43a3-8973-667ef8645b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:37:03.222415 kubelet[2559]: E0424 23:37:03.222408 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8a74c825-4dc2-43a3-8973-667ef8645b56\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-645dbcb77-rtzrr" podUID="8a74c825-4dc2-43a3-8973-667ef8645b56" Apr 24 23:37:03.229289 containerd[1515]: time="2026-04-24T23:37:03.227892938Z" level=error msg="StopPodSandbox for \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\" failed" error="failed to destroy network for sandbox 
\"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:03.229393 kubelet[2559]: E0424 23:37:03.228030 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:03.229393 kubelet[2559]: E0424 23:37:03.228061 2559 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6"} Apr 24 23:37:03.229393 kubelet[2559]: E0424 23:37:03.228087 2559 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09d823c4-6db7-41e1-b24a-cd89a1522a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:37:03.229393 kubelet[2559]: E0424 23:37:03.228104 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09d823c4-6db7-41e1-b24a-cd89a1522a77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-956fb69fd-pkvxp" podUID="09d823c4-6db7-41e1-b24a-cd89a1522a77" Apr 24 23:37:03.233840 containerd[1515]: time="2026-04-24T23:37:03.232704436Z" level=error msg="StopPodSandbox for \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\" failed" error="failed to destroy network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:03.233929 kubelet[2559]: E0424 23:37:03.232976 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:03.233929 kubelet[2559]: E0424 23:37:03.233026 2559 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f"} Apr 24 23:37:03.233929 kubelet[2559]: E0424 23:37:03.233047 2559 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7be2f51-9471-4c71-a057-7cb4caf6a49f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Apr 24 23:37:03.233929 kubelet[2559]: E0424 23:37:03.233065 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7be2f51-9471-4c71-a057-7cb4caf6a49f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4tlwp" podUID="f7be2f51-9471-4c71-a057-7cb4caf6a49f" Apr 24 23:37:03.235588 systemd[1]: Started cri-containerd-9fb8f33907e959fd3c94efafae4b3e15140df9813f7e01da6f90bf6331cd692c.scope - libcontainer container 9fb8f33907e959fd3c94efafae4b3e15140df9813f7e01da6f90bf6331cd692c. Apr 24 23:37:03.243954 containerd[1515]: time="2026-04-24T23:37:03.243884614Z" level=error msg="StopPodSandbox for \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\" failed" error="failed to destroy network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:37:03.244277 kubelet[2559]: E0424 23:37:03.244064 2559 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:03.244277 kubelet[2559]: E0424 23:37:03.244100 2559 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d"} Apr 24 23:37:03.244277 kubelet[2559]: E0424 23:37:03.244122 2559 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab76dadb-e21e-4b46-b8de-e6bfe6616944\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:37:03.244277 kubelet[2559]: E0424 23:37:03.244142 2559 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab76dadb-e21e-4b46-b8de-e6bfe6616944\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7978df8fd-fbqhx" podUID="ab76dadb-e21e-4b46-b8de-e6bfe6616944" Apr 24 23:37:03.272641 containerd[1515]: time="2026-04-24T23:37:03.272606311Z" level=info msg="StartContainer for \"9fb8f33907e959fd3c94efafae4b3e15140df9813f7e01da6f90bf6331cd692c\" returns successfully" Apr 24 23:37:03.770686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6-shm.mount: Deactivated successfully. Apr 24 23:37:03.770989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f-shm.mount: Deactivated successfully. 
Apr 24 23:37:03.771181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998-shm.mount: Deactivated successfully. Apr 24 23:37:03.985508 systemd[1]: Created slice kubepods-besteffort-pod8788446e_4c60_4f01_947d_08e34daf3b75.slice - libcontainer container kubepods-besteffort-pod8788446e_4c60_4f01_947d_08e34daf3b75.slice. Apr 24 23:37:03.989968 containerd[1515]: time="2026-04-24T23:37:03.989881082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5mf4,Uid:8788446e-4c60-4f01-947d-08e34daf3b75,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:04.119891 systemd-networkd[1399]: cali9d09f1f1a88: Link UP Apr 24 23:37:04.120413 systemd-networkd[1399]: cali9d09f1f1a88: Gained carrier Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.046 [ERROR][3786] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.060 [INFO][3786] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0 csi-node-driver- calico-system 8788446e-4c60-4f01-947d-08e34daf3b75 719 0 2026-04-24 23:36:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f csi-node-driver-g5mf4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9d09f1f1a88 [] [] }} ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" 
WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.060 [INFO][3786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.081 [INFO][3798] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" HandleID="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Workload="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.086 [INFO][3798] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" HandleID="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Workload="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"csi-node-driver-g5mf4", "timestamp":"2026-04-24 23:37:04.08182916 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d1080)} Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.086 [INFO][3798] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.086 [INFO][3798] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.086 [INFO][3798] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.088 [INFO][3798] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.092 [INFO][3798] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.096 [INFO][3798] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.097 [INFO][3798] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.099 [INFO][3798] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.099 [INFO][3798] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.100 [INFO][3798] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8 Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.104 [INFO][3798] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.107 [INFO][3798] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.119.193/26] block=192.168.119.192/26 handle="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.107 [INFO][3798] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.193/26] handle="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.107 [INFO][3798] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:04.135787 containerd[1515]: 2026-04-24 23:37:04.108 [INFO][3798] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.193/26] IPv6=[] ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" HandleID="k8s-pod-network.654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Workload="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" Apr 24 23:37:04.137078 containerd[1515]: 2026-04-24 23:37:04.111 [INFO][3786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8788446e-4c60-4f01-947d-08e34daf3b75", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"csi-node-driver-g5mf4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d09f1f1a88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:04.137078 containerd[1515]: 2026-04-24 23:37:04.111 [INFO][3786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.193/32] ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" Apr 24 23:37:04.137078 containerd[1515]: 2026-04-24 23:37:04.111 [INFO][3786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d09f1f1a88 ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" Apr 24 23:37:04.137078 containerd[1515]: 2026-04-24 23:37:04.119 [INFO][3786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" Apr 24 23:37:04.137078 
containerd[1515]: 2026-04-24 23:37:04.120 [INFO][3786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8788446e-4c60-4f01-947d-08e34daf3b75", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8", Pod:"csi-node-driver-g5mf4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.119.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d09f1f1a88", MAC:"f2:94:0f:1b:5b:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:04.137078 containerd[1515]: 
2026-04-24 23:37:04.131 [INFO][3786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8" Namespace="calico-system" Pod="csi-node-driver-g5mf4" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-csi--node--driver--g5mf4-eth0" Apr 24 23:37:04.153261 containerd[1515]: time="2026-04-24T23:37:04.153201562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:04.155817 containerd[1515]: time="2026-04-24T23:37:04.153738810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:04.155817 containerd[1515]: time="2026-04-24T23:37:04.153755265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:04.155817 containerd[1515]: time="2026-04-24T23:37:04.153872255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:04.171036 containerd[1515]: time="2026-04-24T23:37:04.171000799Z" level=info msg="StopPodSandbox for \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\"" Apr 24 23:37:04.190954 systemd[1]: Started cri-containerd-654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8.scope - libcontainer container 654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8. 
Apr 24 23:37:04.203774 kubelet[2559]: I0424 23:37:04.203721 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j25x6" podStartSLOduration=3.360096408 podStartE2EDuration="16.203706537s" podCreationTimestamp="2026-04-24 23:36:48 +0000 UTC" firstStartedPulling="2026-04-24 23:36:48.886731581 +0000 UTC m=+15.992633660" lastFinishedPulling="2026-04-24 23:37:01.7303417 +0000 UTC m=+28.836243789" observedRunningTime="2026-04-24 23:37:04.196523149 +0000 UTC m=+31.302425228" watchObservedRunningTime="2026-04-24 23:37:04.203706537 +0000 UTC m=+31.309608616" Apr 24 23:37:04.231394 containerd[1515]: time="2026-04-24T23:37:04.231292086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g5mf4,Uid:8788446e-4c60-4f01-947d-08e34daf3b75,Namespace:calico-system,Attempt:0,} returns sandbox id \"654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8\"" Apr 24 23:37:04.232908 containerd[1515]: time="2026-04-24T23:37:04.232870789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.224 [INFO][3849] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.224 [INFO][3849] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" iface="eth0" netns="/var/run/netns/cni-68ef4452-3754-289b-2259-66ee8beb9c80" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.226 [INFO][3849] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" iface="eth0" netns="/var/run/netns/cni-68ef4452-3754-289b-2259-66ee8beb9c80" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.226 [INFO][3849] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" iface="eth0" netns="/var/run/netns/cni-68ef4452-3754-289b-2259-66ee8beb9c80" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.226 [INFO][3849] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.226 [INFO][3849] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.244 [INFO][3864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.244 [INFO][3864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.244 [INFO][3864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.249 [WARNING][3864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.249 [INFO][3864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.250 [INFO][3864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:04.254275 containerd[1515]: 2026-04-24 23:37:04.252 [INFO][3849] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:04.254798 containerd[1515]: time="2026-04-24T23:37:04.254643149Z" level=info msg="TearDown network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\" successfully" Apr 24 23:37:04.254798 containerd[1515]: time="2026-04-24T23:37:04.254666836Z" level=info msg="StopPodSandbox for \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\" returns successfully" Apr 24 23:37:04.256759 systemd[1]: run-netns-cni\x2d68ef4452\x2d3754\x2d289b\x2d2259\x2d66ee8beb9c80.mount: Deactivated successfully. 
Apr 24 23:37:04.303027 kubelet[2559]: I0424 23:37:04.302970 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-backend-key-pair\") pod \"8a74c825-4dc2-43a3-8973-667ef8645b56\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " Apr 24 23:37:04.303027 kubelet[2559]: I0424 23:37:04.303028 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-ca-bundle\") pod \"8a74c825-4dc2-43a3-8973-667ef8645b56\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " Apr 24 23:37:04.303181 kubelet[2559]: I0424 23:37:04.303043 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-nginx-config\") pod \"8a74c825-4dc2-43a3-8973-667ef8645b56\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " Apr 24 23:37:04.303181 kubelet[2559]: I0424 23:37:04.303062 2559 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bflcp\" (UniqueName: \"kubernetes.io/projected/8a74c825-4dc2-43a3-8973-667ef8645b56-kube-api-access-bflcp\") pod \"8a74c825-4dc2-43a3-8973-667ef8645b56\" (UID: \"8a74c825-4dc2-43a3-8973-667ef8645b56\") " Apr 24 23:37:04.303866 kubelet[2559]: I0424 23:37:04.303641 2559 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8a74c825-4dc2-43a3-8973-667ef8645b56" (UID: "8a74c825-4dc2-43a3-8973-667ef8645b56"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:37:04.304298 kubelet[2559]: I0424 23:37:04.304277 2559 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "8a74c825-4dc2-43a3-8973-667ef8645b56" (UID: "8a74c825-4dc2-43a3-8973-667ef8645b56"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:37:04.306611 kubelet[2559]: I0424 23:37:04.306548 2559 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a74c825-4dc2-43a3-8973-667ef8645b56-kube-api-access-bflcp" (OuterVolumeSpecName: "kube-api-access-bflcp") pod "8a74c825-4dc2-43a3-8973-667ef8645b56" (UID: "8a74c825-4dc2-43a3-8973-667ef8645b56"). InnerVolumeSpecName "kube-api-access-bflcp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:37:04.306680 kubelet[2559]: I0424 23:37:04.306613 2559 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8a74c825-4dc2-43a3-8973-667ef8645b56" (UID: "8a74c825-4dc2-43a3-8973-667ef8645b56"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:37:04.403981 kubelet[2559]: I0424 23:37:04.403727 2559 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-ca-bundle\") on node \"ci-4081-3-6-n-61b787660f\" DevicePath \"\"" Apr 24 23:37:04.403981 kubelet[2559]: I0424 23:37:04.403771 2559 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/8a74c825-4dc2-43a3-8973-667ef8645b56-nginx-config\") on node \"ci-4081-3-6-n-61b787660f\" DevicePath \"\"" Apr 24 23:37:04.403981 kubelet[2559]: I0424 23:37:04.403789 2559 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bflcp\" (UniqueName: \"kubernetes.io/projected/8a74c825-4dc2-43a3-8973-667ef8645b56-kube-api-access-bflcp\") on node \"ci-4081-3-6-n-61b787660f\" DevicePath \"\"" Apr 24 23:37:04.403981 kubelet[2559]: I0424 23:37:04.403838 2559 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8a74c825-4dc2-43a3-8973-667ef8645b56-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-61b787660f\" DevicePath \"\"" Apr 24 23:37:04.759306 systemd[1]: var-lib-kubelet-pods-8a74c825\x2d4dc2\x2d43a3\x2d8973\x2d667ef8645b56-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbflcp.mount: Deactivated successfully. Apr 24 23:37:04.759627 systemd[1]: var-lib-kubelet-pods-8a74c825\x2d4dc2\x2d43a3\x2d8973\x2d667ef8645b56-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:37:04.994160 systemd[1]: Removed slice kubepods-besteffort-pod8a74c825_4dc2_43a3_8973_667ef8645b56.slice - libcontainer container kubepods-besteffort-pod8a74c825_4dc2_43a3_8973_667ef8645b56.slice. 
Apr 24 23:37:05.175313 kubelet[2559]: I0424 23:37:05.173899 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:05.259217 systemd[1]: Created slice kubepods-besteffort-podaff942cf_e73c_4287_a0d6_21bcfe040702.slice - libcontainer container kubepods-besteffort-podaff942cf_e73c_4287_a0d6_21bcfe040702.slice. Apr 24 23:37:05.310415 kubelet[2559]: I0424 23:37:05.310312 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aff942cf-e73c-4287-a0d6-21bcfe040702-whisker-backend-key-pair\") pod \"whisker-7744dc9798-tms8c\" (UID: \"aff942cf-e73c-4287-a0d6-21bcfe040702\") " pod="calico-system/whisker-7744dc9798-tms8c" Apr 24 23:37:05.310415 kubelet[2559]: I0424 23:37:05.310394 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddtc\" (UniqueName: \"kubernetes.io/projected/aff942cf-e73c-4287-a0d6-21bcfe040702-kube-api-access-6ddtc\") pod \"whisker-7744dc9798-tms8c\" (UID: \"aff942cf-e73c-4287-a0d6-21bcfe040702\") " pod="calico-system/whisker-7744dc9798-tms8c" Apr 24 23:37:05.310415 kubelet[2559]: I0424 23:37:05.310428 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/aff942cf-e73c-4287-a0d6-21bcfe040702-nginx-config\") pod \"whisker-7744dc9798-tms8c\" (UID: \"aff942cf-e73c-4287-a0d6-21bcfe040702\") " pod="calico-system/whisker-7744dc9798-tms8c" Apr 24 23:37:05.311257 kubelet[2559]: I0424 23:37:05.310458 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff942cf-e73c-4287-a0d6-21bcfe040702-whisker-ca-bundle\") pod \"whisker-7744dc9798-tms8c\" (UID: \"aff942cf-e73c-4287-a0d6-21bcfe040702\") " pod="calico-system/whisker-7744dc9798-tms8c" Apr 24 
23:37:05.360199 systemd-networkd[1399]: cali9d09f1f1a88: Gained IPv6LL Apr 24 23:37:05.564213 containerd[1515]: time="2026-04-24T23:37:05.563183813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7744dc9798-tms8c,Uid:aff942cf-e73c-4287-a0d6-21bcfe040702,Namespace:calico-system,Attempt:0,}" Apr 24 23:37:05.720058 systemd-networkd[1399]: cali764bccebc19: Link UP Apr 24 23:37:05.721650 systemd-networkd[1399]: cali764bccebc19: Gained carrier Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.632 [ERROR][3980] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.643 [INFO][3980] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0 whisker-7744dc9798- calico-system aff942cf-e73c-4287-a0d6-21bcfe040702 921 0 2026-04-24 23:37:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7744dc9798 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f whisker-7744dc9798-tms8c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali764bccebc19 [] [] }} ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.644 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" 
WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.676 [INFO][3992] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" HandleID="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.681 [INFO][3992] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" HandleID="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f6110), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"whisker-7744dc9798-tms8c", "timestamp":"2026-04-24 23:37:05.676612655 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003b4000)} Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.682 [INFO][3992] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.682 [INFO][3992] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.682 [INFO][3992] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.684 [INFO][3992] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.689 [INFO][3992] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.693 [INFO][3992] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.695 [INFO][3992] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.696 [INFO][3992] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.696 [INFO][3992] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.698 [INFO][3992] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.701 [INFO][3992] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.705 [INFO][3992] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.119.194/26] block=192.168.119.192/26 handle="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.706 [INFO][3992] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.194/26] handle="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.706 [INFO][3992] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:05.737853 containerd[1515]: 2026-04-24 23:37:05.706 [INFO][3992] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.194/26] IPv6=[] ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" HandleID="k8s-pod-network.8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" Apr 24 23:37:05.738414 containerd[1515]: 2026-04-24 23:37:05.711 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0", GenerateName:"whisker-7744dc9798-", Namespace:"calico-system", SelfLink:"", UID:"aff942cf-e73c-4287-a0d6-21bcfe040702", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7744dc9798", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"whisker-7744dc9798-tms8c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali764bccebc19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:05.738414 containerd[1515]: 2026-04-24 23:37:05.711 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.194/32] ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" Apr 24 23:37:05.738414 containerd[1515]: 2026-04-24 23:37:05.711 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali764bccebc19 ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" Apr 24 23:37:05.738414 containerd[1515]: 2026-04-24 23:37:05.722 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" Apr 24 23:37:05.738414 containerd[1515]: 2026-04-24 23:37:05.724 [INFO][3980] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0", GenerateName:"whisker-7744dc9798-", Namespace:"calico-system", SelfLink:"", UID:"aff942cf-e73c-4287-a0d6-21bcfe040702", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7744dc9798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc", Pod:"whisker-7744dc9798-tms8c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.119.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali764bccebc19", MAC:"06:01:02:96:f2:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:05.738414 containerd[1515]: 2026-04-24 23:37:05.733 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc" Namespace="calico-system" Pod="whisker-7744dc9798-tms8c" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--7744dc9798--tms8c-eth0" Apr 24 23:37:05.765141 containerd[1515]: time="2026-04-24T23:37:05.764484750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:05.765141 containerd[1515]: time="2026-04-24T23:37:05.764529197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:05.765141 containerd[1515]: time="2026-04-24T23:37:05.764853968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:05.765141 containerd[1515]: time="2026-04-24T23:37:05.764969745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:05.791082 systemd[1]: run-containerd-runc-k8s.io-8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc-runc.KyiviK.mount: Deactivated successfully. Apr 24 23:37:05.799074 systemd[1]: Started cri-containerd-8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc.scope - libcontainer container 8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc. 
Apr 24 23:37:05.874154 containerd[1515]: time="2026-04-24T23:37:05.874121557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7744dc9798-tms8c,Uid:aff942cf-e73c-4287-a0d6-21bcfe040702,Namespace:calico-system,Attempt:0,} returns sandbox id \"8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc\"" Apr 24 23:37:05.983273 containerd[1515]: time="2026-04-24T23:37:05.983206076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:05.984064 containerd[1515]: time="2026-04-24T23:37:05.983990393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 24 23:37:05.984997 containerd[1515]: time="2026-04-24T23:37:05.984762030Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:05.986461 containerd[1515]: time="2026-04-24T23:37:05.986416124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:05.987744 containerd[1515]: time="2026-04-24T23:37:05.986922974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.754026196s" Apr 24 23:37:05.987744 containerd[1515]: time="2026-04-24T23:37:05.986956375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 24 23:37:05.988292 containerd[1515]: 
time="2026-04-24T23:37:05.988268143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:37:05.991308 containerd[1515]: time="2026-04-24T23:37:05.991268650Z" level=info msg="CreateContainer within sandbox \"654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:37:06.015487 containerd[1515]: time="2026-04-24T23:37:06.015448534Z" level=info msg="CreateContainer within sandbox \"654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"beaa89aa3988c505cc6f794a64ddd2ad736d1237ed8f38d694df42f245f63eaa\"" Apr 24 23:37:06.016087 containerd[1515]: time="2026-04-24T23:37:06.015947600Z" level=info msg="StartContainer for \"beaa89aa3988c505cc6f794a64ddd2ad736d1237ed8f38d694df42f245f63eaa\"" Apr 24 23:37:06.040918 systemd[1]: Started cri-containerd-beaa89aa3988c505cc6f794a64ddd2ad736d1237ed8f38d694df42f245f63eaa.scope - libcontainer container beaa89aa3988c505cc6f794a64ddd2ad736d1237ed8f38d694df42f245f63eaa. 
Apr 24 23:37:06.075463 containerd[1515]: time="2026-04-24T23:37:06.075394745Z" level=info msg="StartContainer for \"beaa89aa3988c505cc6f794a64ddd2ad736d1237ed8f38d694df42f245f63eaa\" returns successfully" Apr 24 23:37:06.978625 kubelet[2559]: I0424 23:37:06.978585 2559 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a74c825-4dc2-43a3-8973-667ef8645b56" path="/var/lib/kubelet/pods/8a74c825-4dc2-43a3-8973-667ef8645b56/volumes" Apr 24 23:37:07.023962 systemd-networkd[1399]: cali764bccebc19: Gained IPv6LL Apr 24 23:37:07.956152 containerd[1515]: time="2026-04-24T23:37:07.956107076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:07.957682 containerd[1515]: time="2026-04-24T23:37:07.957209818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 24 23:37:07.959253 containerd[1515]: time="2026-04-24T23:37:07.958505545Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:07.960231 containerd[1515]: time="2026-04-24T23:37:07.960204020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:07.960790 containerd[1515]: time="2026-04-24T23:37:07.960774224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.972482475s" Apr 24 23:37:07.960921 containerd[1515]: 
time="2026-04-24T23:37:07.960831522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 24 23:37:07.962148 containerd[1515]: time="2026-04-24T23:37:07.962033306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:37:07.964680 containerd[1515]: time="2026-04-24T23:37:07.964595426Z" level=info msg="CreateContainer within sandbox \"8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:37:07.982830 containerd[1515]: time="2026-04-24T23:37:07.981446002Z" level=info msg="CreateContainer within sandbox \"8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ebe6eb38747a036f4d449da80b13ca5094653bc738d9eeb640f9954363fb5633\"" Apr 24 23:37:07.986734 containerd[1515]: time="2026-04-24T23:37:07.986715284Z" level=info msg="StartContainer for \"ebe6eb38747a036f4d449da80b13ca5094653bc738d9eeb640f9954363fb5633\"" Apr 24 23:37:08.037533 systemd[1]: run-containerd-runc-k8s.io-ebe6eb38747a036f4d449da80b13ca5094653bc738d9eeb640f9954363fb5633-runc.x1rWx6.mount: Deactivated successfully. Apr 24 23:37:08.053121 systemd[1]: Started cri-containerd-ebe6eb38747a036f4d449da80b13ca5094653bc738d9eeb640f9954363fb5633.scope - libcontainer container ebe6eb38747a036f4d449da80b13ca5094653bc738d9eeb640f9954363fb5633. 
Apr 24 23:37:08.113636 containerd[1515]: time="2026-04-24T23:37:08.113605866Z" level=info msg="StartContainer for \"ebe6eb38747a036f4d449da80b13ca5094653bc738d9eeb640f9954363fb5633\" returns successfully" Apr 24 23:37:09.576101 kubelet[2559]: I0424 23:37:09.575539 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:09.965052 containerd[1515]: time="2026-04-24T23:37:09.965010823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:09.967528 containerd[1515]: time="2026-04-24T23:37:09.967137956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 24 23:37:09.968291 containerd[1515]: time="2026-04-24T23:37:09.968264109Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:09.971627 containerd[1515]: time="2026-04-24T23:37:09.971601485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:09.973886 containerd[1515]: time="2026-04-24T23:37:09.973862493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.011809618s" Apr 24 23:37:09.973936 containerd[1515]: time="2026-04-24T23:37:09.973888934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" 
returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 24 23:37:09.975017 containerd[1515]: time="2026-04-24T23:37:09.975003910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:37:09.978131 containerd[1515]: time="2026-04-24T23:37:09.978112645Z" level=info msg="CreateContainer within sandbox \"654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 23:37:10.000168 containerd[1515]: time="2026-04-24T23:37:10.000123747Z" level=info msg="CreateContainer within sandbox \"654daace64def3a811d50403bf974b824c91ed46cc902f57537bfdc72667ead8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9830c8500d7057837d4ca24cdc4de510d1c247c6591e6447e38cd0abe1ad27f6\"" Apr 24 23:37:10.001913 containerd[1515]: time="2026-04-24T23:37:10.000792490Z" level=info msg="StartContainer for \"9830c8500d7057837d4ca24cdc4de510d1c247c6591e6447e38cd0abe1ad27f6\"" Apr 24 23:37:10.029026 systemd[1]: run-containerd-runc-k8s.io-9830c8500d7057837d4ca24cdc4de510d1c247c6591e6447e38cd0abe1ad27f6-runc.YEoU2k.mount: Deactivated successfully. Apr 24 23:37:10.034893 systemd[1]: Started cri-containerd-9830c8500d7057837d4ca24cdc4de510d1c247c6591e6447e38cd0abe1ad27f6.scope - libcontainer container 9830c8500d7057837d4ca24cdc4de510d1c247c6591e6447e38cd0abe1ad27f6. 
Apr 24 23:37:10.058458 containerd[1515]: time="2026-04-24T23:37:10.058421284Z" level=info msg="StartContainer for \"9830c8500d7057837d4ca24cdc4de510d1c247c6591e6447e38cd0abe1ad27f6\" returns successfully" Apr 24 23:37:10.213995 kubelet[2559]: I0424 23:37:10.213886 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-g5mf4" podStartSLOduration=16.471675459 podStartE2EDuration="22.213872595s" podCreationTimestamp="2026-04-24 23:36:48 +0000 UTC" firstStartedPulling="2026-04-24 23:37:04.232231604 +0000 UTC m=+31.338133683" lastFinishedPulling="2026-04-24 23:37:09.97442873 +0000 UTC m=+37.080330819" observedRunningTime="2026-04-24 23:37:10.203447606 +0000 UTC m=+37.309349695" watchObservedRunningTime="2026-04-24 23:37:10.213872595 +0000 UTC m=+37.319774674" Apr 24 23:37:11.056785 kubelet[2559]: I0424 23:37:11.056730 2559 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 23:37:11.056785 kubelet[2559]: I0424 23:37:11.056786 2559 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:37:11.144839 kernel: calico-node[4285]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:37:11.561311 systemd-networkd[1399]: vxlan.calico: Link UP Apr 24 23:37:11.561318 systemd-networkd[1399]: vxlan.calico: Gained carrier Apr 24 23:37:12.253066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302105404.mount: Deactivated successfully. 
Apr 24 23:37:12.275703 containerd[1515]: time="2026-04-24T23:37:12.275657279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:12.277070 containerd[1515]: time="2026-04-24T23:37:12.277022901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 24 23:37:12.278445 containerd[1515]: time="2026-04-24T23:37:12.278120724Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:12.280530 containerd[1515]: time="2026-04-24T23:37:12.280509416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:12.281042 containerd[1515]: time="2026-04-24T23:37:12.281016319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.305957215s" Apr 24 23:37:12.281081 containerd[1515]: time="2026-04-24T23:37:12.281049219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 24 23:37:12.285740 containerd[1515]: time="2026-04-24T23:37:12.285708793Z" level=info msg="CreateContainer within sandbox \"8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:37:12.298380 
containerd[1515]: time="2026-04-24T23:37:12.298341082Z" level=info msg="CreateContainer within sandbox \"8f1612aef9c224c2a107dbb50d86876d0b08fa548d4000af7ca07b2572d873cc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d288dd526e2becb7184b8e05af4f4e1823569a66e787896b4c68fe894a79b5eb\"" Apr 24 23:37:12.299349 containerd[1515]: time="2026-04-24T23:37:12.298984884Z" level=info msg="StartContainer for \"d288dd526e2becb7184b8e05af4f4e1823569a66e787896b4c68fe894a79b5eb\"" Apr 24 23:37:12.341931 systemd[1]: Started cri-containerd-d288dd526e2becb7184b8e05af4f4e1823569a66e787896b4c68fe894a79b5eb.scope - libcontainer container d288dd526e2becb7184b8e05af4f4e1823569a66e787896b4c68fe894a79b5eb. Apr 24 23:37:12.374335 containerd[1515]: time="2026-04-24T23:37:12.374295043Z" level=info msg="StartContainer for \"d288dd526e2becb7184b8e05af4f4e1823569a66e787896b4c68fe894a79b5eb\" returns successfully" Apr 24 23:37:13.226181 kubelet[2559]: I0424 23:37:13.225987 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7744dc9798-tms8c" podStartSLOduration=1.8233725459999999 podStartE2EDuration="8.225969143s" podCreationTimestamp="2026-04-24 23:37:05 +0000 UTC" firstStartedPulling="2026-04-24 23:37:05.879378444 +0000 UTC m=+32.985280523" lastFinishedPulling="2026-04-24 23:37:12.281975031 +0000 UTC m=+39.387877120" observedRunningTime="2026-04-24 23:37:13.225452225 +0000 UTC m=+40.331354334" watchObservedRunningTime="2026-04-24 23:37:13.225969143 +0000 UTC m=+40.331871252" Apr 24 23:37:13.424300 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Apr 24 23:37:13.977845 containerd[1515]: time="2026-04-24T23:37:13.977266002Z" level=info msg="StopPodSandbox for \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\"" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.060 [INFO][4450] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.060 [INFO][4450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" iface="eth0" netns="/var/run/netns/cni-2fc88621-e51e-8638-e190-510b75aaf54c" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.061 [INFO][4450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" iface="eth0" netns="/var/run/netns/cni-2fc88621-e51e-8638-e190-510b75aaf54c" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.061 [INFO][4450] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" iface="eth0" netns="/var/run/netns/cni-2fc88621-e51e-8638-e190-510b75aaf54c" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.061 [INFO][4450] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.061 [INFO][4450] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.092 [INFO][4458] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.093 [INFO][4458] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.093 [INFO][4458] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.097 [WARNING][4458] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.097 [INFO][4458] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.098 [INFO][4458] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:14.107834 containerd[1515]: 2026-04-24 23:37:14.102 [INFO][4450] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:14.108373 containerd[1515]: time="2026-04-24T23:37:14.108246475Z" level=info msg="TearDown network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\" successfully" Apr 24 23:37:14.108373 containerd[1515]: time="2026-04-24T23:37:14.108271985Z" level=info msg="StopPodSandbox for \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\" returns successfully" Apr 24 23:37:14.108939 containerd[1515]: time="2026-04-24T23:37:14.108925659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-956fb69fd-pkvxp,Uid:09d823c4-6db7-41e1-b24a-cd89a1522a77,Namespace:calico-system,Attempt:1,}" Apr 24 23:37:14.109371 systemd[1]: run-netns-cni\x2d2fc88621\x2de51e\x2d8638\x2de190\x2d510b75aaf54c.mount: Deactivated successfully. Apr 24 23:37:14.205969 systemd-networkd[1399]: cali2f7ac6a0e52: Link UP Apr 24 23:37:14.206191 systemd-networkd[1399]: cali2f7ac6a0e52: Gained carrier Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.150 [INFO][4465] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0 calico-apiserver-956fb69fd- calico-system 09d823c4-6db7-41e1-b24a-cd89a1522a77 975 0 2026-04-24 23:36:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:956fb69fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f calico-apiserver-956fb69fd-pkvxp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2f7ac6a0e52 [] [] }} ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" 
WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.150 [INFO][4465] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.169 [INFO][4477] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" HandleID="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.175 [INFO][4477] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" HandleID="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277390), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"calico-apiserver-956fb69fd-pkvxp", "timestamp":"2026-04-24 23:37:14.169028697 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00042ef20)} Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.175 [INFO][4477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.175 [INFO][4477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.175 [INFO][4477] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.177 [INFO][4477] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.181 [INFO][4477] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.185 [INFO][4477] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.186 [INFO][4477] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.188 [INFO][4477] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.188 [INFO][4477] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.189 [INFO][4477] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176 Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.193 [INFO][4477] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 
handle="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.197 [INFO][4477] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.119.195/26] block=192.168.119.192/26 handle="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.197 [INFO][4477] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.195/26] handle="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.197 [INFO][4477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:14.229865 containerd[1515]: 2026-04-24 23:37:14.198 [INFO][4477] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.195/26] IPv6=[] ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" HandleID="k8s-pod-network.c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.230545 containerd[1515]: 2026-04-24 23:37:14.201 [INFO][4465] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"09d823c4-6db7-41e1-b24a-cd89a1522a77", ResourceVersion:"975", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"calico-apiserver-956fb69fd-pkvxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2f7ac6a0e52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:14.230545 containerd[1515]: 2026-04-24 23:37:14.201 [INFO][4465] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.195/32] ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.230545 containerd[1515]: 2026-04-24 23:37:14.202 [INFO][4465] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f7ac6a0e52 ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.230545 containerd[1515]: 2026-04-24 23:37:14.208 [INFO][4465] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.230545 containerd[1515]: 2026-04-24 23:37:14.209 [INFO][4465] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"09d823c4-6db7-41e1-b24a-cd89a1522a77", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176", Pod:"calico-apiserver-956fb69fd-pkvxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2f7ac6a0e52", MAC:"1a:0f:1f:02:3a:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:14.230545 containerd[1515]: 2026-04-24 23:37:14.223 [INFO][4465] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-pkvxp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:14.247076 containerd[1515]: time="2026-04-24T23:37:14.246227739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:14.247076 containerd[1515]: time="2026-04-24T23:37:14.246281271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:14.247076 containerd[1515]: time="2026-04-24T23:37:14.246291376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:14.247076 containerd[1515]: time="2026-04-24T23:37:14.246360051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:14.270930 systemd[1]: Started cri-containerd-c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176.scope - libcontainer container c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176. 
Apr 24 23:37:14.310385 containerd[1515]: time="2026-04-24T23:37:14.310339607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-956fb69fd-pkvxp,Uid:09d823c4-6db7-41e1-b24a-cd89a1522a77,Namespace:calico-system,Attempt:1,} returns sandbox id \"c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176\"" Apr 24 23:37:14.311847 containerd[1515]: time="2026-04-24T23:37:14.311827952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:37:14.977980 containerd[1515]: time="2026-04-24T23:37:14.977697753Z" level=info msg="StopPodSandbox for \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\"" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.023 [INFO][4556] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.024 [INFO][4556] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" iface="eth0" netns="/var/run/netns/cni-2a593a62-37e9-3b8c-252a-fad3c48d1c0a" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.024 [INFO][4556] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" iface="eth0" netns="/var/run/netns/cni-2a593a62-37e9-3b8c-252a-fad3c48d1c0a" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.025 [INFO][4556] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" iface="eth0" netns="/var/run/netns/cni-2a593a62-37e9-3b8c-252a-fad3c48d1c0a" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.025 [INFO][4556] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.025 [INFO][4556] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.057 [INFO][4563] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.058 [INFO][4563] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.058 [INFO][4563] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.063 [WARNING][4563] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.063 [INFO][4563] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.065 [INFO][4563] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:15.069449 containerd[1515]: 2026-04-24 23:37:15.067 [INFO][4556] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:15.070215 containerd[1515]: time="2026-04-24T23:37:15.069715433Z" level=info msg="TearDown network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\" successfully" Apr 24 23:37:15.070215 containerd[1515]: time="2026-04-24T23:37:15.069745828Z" level=info msg="StopPodSandbox for \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\" returns successfully" Apr 24 23:37:15.070407 containerd[1515]: time="2026-04-24T23:37:15.070382487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-956fb69fd-vsnpb,Uid:13f36578-f50e-4376-ab00-85fdce7a7d13,Namespace:calico-system,Attempt:1,}" Apr 24 23:37:15.110345 systemd[1]: run-netns-cni\x2d2a593a62\x2d37e9\x2d3b8c\x2d252a\x2dfad3c48d1c0a.mount: Deactivated successfully. 
Apr 24 23:37:15.156078 systemd-networkd[1399]: calia20395b46ff: Link UP Apr 24 23:37:15.157097 systemd-networkd[1399]: calia20395b46ff: Gained carrier Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.104 [INFO][4569] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0 calico-apiserver-956fb69fd- calico-system 13f36578-f50e-4376-ab00-85fdce7a7d13 987 0 2026-04-24 23:36:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:956fb69fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f calico-apiserver-956fb69fd-vsnpb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia20395b46ff [] [] }} ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.104 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.128 [INFO][4582] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" HandleID="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.170592 
containerd[1515]: 2026-04-24 23:37:15.132 [INFO][4582] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" HandleID="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"calico-apiserver-956fb69fd-vsnpb", "timestamp":"2026-04-24 23:37:15.128046876 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112840)} Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.132 [INFO][4582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.132 [INFO][4582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.132 [INFO][4582] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.134 [INFO][4582] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.137 [INFO][4582] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.140 [INFO][4582] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.141 [INFO][4582] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.142 [INFO][4582] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.142 [INFO][4582] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.144 [INFO][4582] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.147 [INFO][4582] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.151 [INFO][4582] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.119.196/26] block=192.168.119.192/26 handle="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.151 [INFO][4582] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.196/26] handle="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.151 [INFO][4582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:15.170592 containerd[1515]: 2026-04-24 23:37:15.151 [INFO][4582] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.196/26] IPv6=[] ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" HandleID="k8s-pod-network.0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.171299 containerd[1515]: 2026-04-24 23:37:15.153 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"13f36578-f50e-4376-ab00-85fdce7a7d13", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"calico-apiserver-956fb69fd-vsnpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia20395b46ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:15.171299 containerd[1515]: 2026-04-24 23:37:15.153 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.196/32] ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.171299 containerd[1515]: 2026-04-24 23:37:15.153 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia20395b46ff ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.171299 containerd[1515]: 2026-04-24 23:37:15.157 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" 
WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.171299 containerd[1515]: 2026-04-24 23:37:15.157 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"13f36578-f50e-4376-ab00-85fdce7a7d13", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc", Pod:"calico-apiserver-956fb69fd-vsnpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia20395b46ff", MAC:"2a:0a:d2:c7:64:69", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:15.171299 containerd[1515]: 2026-04-24 23:37:15.166 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc" Namespace="calico-system" Pod="calico-apiserver-956fb69fd-vsnpb" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:15.191379 containerd[1515]: time="2026-04-24T23:37:15.190758640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:15.191379 containerd[1515]: time="2026-04-24T23:37:15.191094341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:15.191379 containerd[1515]: time="2026-04-24T23:37:15.191144277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:15.191379 containerd[1515]: time="2026-04-24T23:37:15.191271240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:15.219925 systemd[1]: Started cri-containerd-0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc.scope - libcontainer container 0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc. 
Apr 24 23:37:15.255026 containerd[1515]: time="2026-04-24T23:37:15.254997376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-956fb69fd-vsnpb,Uid:13f36578-f50e-4376-ab00-85fdce7a7d13,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc\"" Apr 24 23:37:15.976794 containerd[1515]: time="2026-04-24T23:37:15.976672536Z" level=info msg="StopPodSandbox for \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\"" Apr 24 23:37:16.049605 systemd-networkd[1399]: cali2f7ac6a0e52: Gained IPv6LL Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.022 [INFO][4660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.022 [INFO][4660] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" iface="eth0" netns="/var/run/netns/cni-c7c5e8d9-c6a0-2343-3e6a-cec8afef8da0" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.026 [INFO][4660] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" iface="eth0" netns="/var/run/netns/cni-c7c5e8d9-c6a0-2343-3e6a-cec8afef8da0" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.026 [INFO][4660] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" iface="eth0" netns="/var/run/netns/cni-c7c5e8d9-c6a0-2343-3e6a-cec8afef8da0" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.026 [INFO][4660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.026 [INFO][4660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.065 [INFO][4667] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.065 [INFO][4667] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.065 [INFO][4667] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.072 [WARNING][4667] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.072 [INFO][4667] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.074 [INFO][4667] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:16.092749 containerd[1515]: 2026-04-24 23:37:16.076 [INFO][4660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:16.093256 containerd[1515]: time="2026-04-24T23:37:16.092858385Z" level=info msg="TearDown network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\" successfully" Apr 24 23:37:16.093256 containerd[1515]: time="2026-04-24T23:37:16.092878705Z" level=info msg="StopPodSandbox for \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\" returns successfully" Apr 24 23:37:16.095880 containerd[1515]: time="2026-04-24T23:37:16.093463314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7978df8fd-fbqhx,Uid:ab76dadb-e21e-4b46-b8de-e6bfe6616944,Namespace:calico-system,Attempt:1,}" Apr 24 23:37:16.096381 systemd[1]: run-netns-cni\x2dc7c5e8d9\x2dc6a0\x2d2343\x2d3e6a\x2dcec8afef8da0.mount: Deactivated successfully. 
Apr 24 23:37:16.183696 systemd-networkd[1399]: calif3946601103: Link UP Apr 24 23:37:16.183896 systemd-networkd[1399]: calif3946601103: Gained carrier Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.133 [INFO][4677] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0 calico-kube-controllers-7978df8fd- calico-system ab76dadb-e21e-4b46-b8de-e6bfe6616944 993 0 2026-04-24 23:36:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7978df8fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f calico-kube-controllers-7978df8fd-fbqhx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif3946601103 [] [] }} ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.133 [INFO][4677] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.151 [INFO][4689] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" HandleID="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" 
Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.157 [INFO][4689] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" HandleID="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"calico-kube-controllers-7978df8fd-fbqhx", "timestamp":"2026-04-24 23:37:16.151971073 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000546160)} Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.157 [INFO][4689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.157 [INFO][4689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.157 [INFO][4689] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.159 [INFO][4689] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.162 [INFO][4689] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.166 [INFO][4689] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.167 [INFO][4689] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.169 [INFO][4689] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.169 [INFO][4689] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.170 [INFO][4689] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.173 [INFO][4689] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.177 [INFO][4689] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.119.197/26] block=192.168.119.192/26 handle="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.177 [INFO][4689] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.197/26] handle="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.177 [INFO][4689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:16.199042 containerd[1515]: 2026-04-24 23:37:16.177 [INFO][4689] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.197/26] IPv6=[] ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" HandleID="k8s-pod-network.c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.199472 containerd[1515]: 2026-04-24 23:37:16.180 [INFO][4677] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0", GenerateName:"calico-kube-controllers-7978df8fd-", Namespace:"calico-system", SelfLink:"", UID:"ab76dadb-e21e-4b46-b8de-e6bfe6616944", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7978df8fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"calico-kube-controllers-7978df8fd-fbqhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3946601103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:16.199472 containerd[1515]: 2026-04-24 23:37:16.180 [INFO][4677] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.197/32] ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.199472 containerd[1515]: 2026-04-24 23:37:16.180 [INFO][4677] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3946601103 ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.199472 containerd[1515]: 2026-04-24 23:37:16.183 [INFO][4677] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.199472 containerd[1515]: 2026-04-24 23:37:16.184 [INFO][4677] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0", GenerateName:"calico-kube-controllers-7978df8fd-", Namespace:"calico-system", SelfLink:"", UID:"ab76dadb-e21e-4b46-b8de-e6bfe6616944", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7978df8fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce", Pod:"calico-kube-controllers-7978df8fd-fbqhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3946601103", MAC:"a2:75:2c:12:a7:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:16.199472 containerd[1515]: 2026-04-24 23:37:16.192 [INFO][4677] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce" Namespace="calico-system" Pod="calico-kube-controllers-7978df8fd-fbqhx" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:16.216615 containerd[1515]: time="2026-04-24T23:37:16.216450554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:16.216725 containerd[1515]: time="2026-04-24T23:37:16.216639130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:16.216725 containerd[1515]: time="2026-04-24T23:37:16.216661675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:16.217264 containerd[1515]: time="2026-04-24T23:37:16.217219692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:16.232946 systemd[1]: Started cri-containerd-c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce.scope - libcontainer container c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce. 
Apr 24 23:37:16.270222 containerd[1515]: time="2026-04-24T23:37:16.270165202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7978df8fd-fbqhx,Uid:ab76dadb-e21e-4b46-b8de-e6bfe6616944,Namespace:calico-system,Attempt:1,} returns sandbox id \"c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce\"" Apr 24 23:37:16.304516 systemd-networkd[1399]: calia20395b46ff: Gained IPv6LL Apr 24 23:37:16.978277 containerd[1515]: time="2026-04-24T23:37:16.977267545Z" level=info msg="StopPodSandbox for \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\"" Apr 24 23:37:16.979745 containerd[1515]: time="2026-04-24T23:37:16.979341246Z" level=info msg="StopPodSandbox for \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\"" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.039 [INFO][4795] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.039 [INFO][4795] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" iface="eth0" netns="/var/run/netns/cni-12435f94-c397-8465-f72e-360eb488c6cc" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.040 [INFO][4795] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" iface="eth0" netns="/var/run/netns/cni-12435f94-c397-8465-f72e-360eb488c6cc" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.041 [INFO][4795] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" iface="eth0" netns="/var/run/netns/cni-12435f94-c397-8465-f72e-360eb488c6cc" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.041 [INFO][4795] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.041 [INFO][4795] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.091 [INFO][4807] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.091 [INFO][4807] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.091 [INFO][4807] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.098 [WARNING][4807] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.098 [INFO][4807] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.100 [INFO][4807] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:17.110133 containerd[1515]: 2026-04-24 23:37:17.104 [INFO][4795] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:17.114171 containerd[1515]: time="2026-04-24T23:37:17.113864409Z" level=info msg="TearDown network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\" successfully" Apr 24 23:37:17.114171 containerd[1515]: time="2026-04-24T23:37:17.113895065Z" level=info msg="StopPodSandbox for \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\" returns successfully" Apr 24 23:37:17.114439 containerd[1515]: time="2026-04-24T23:37:17.114413381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-v245s,Uid:cb3eda9a-d9f6-480e-8b39-2bd662594f1a,Namespace:calico-system,Attempt:1,}" Apr 24 23:37:17.115945 systemd[1]: run-netns-cni\x2d12435f94\x2dc397\x2d8465\x2df72e\x2d360eb488c6cc.mount: Deactivated successfully. 
Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.060 [INFO][4794] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.060 [INFO][4794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" iface="eth0" netns="/var/run/netns/cni-fe1c0da4-0359-ad70-e576-4f5d06680db4" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.060 [INFO][4794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" iface="eth0" netns="/var/run/netns/cni-fe1c0da4-0359-ad70-e576-4f5d06680db4" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.060 [INFO][4794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" iface="eth0" netns="/var/run/netns/cni-fe1c0da4-0359-ad70-e576-4f5d06680db4" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.060 [INFO][4794] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.060 [INFO][4794] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.102 [INFO][4812] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.102 
[INFO][4812] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.102 [INFO][4812] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.109 [WARNING][4812] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.109 [INFO][4812] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.111 [INFO][4812] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:17.117264 containerd[1515]: 2026-04-24 23:37:17.114 [INFO][4794] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:17.120793 containerd[1515]: time="2026-04-24T23:37:17.120769019Z" level=info msg="TearDown network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\" successfully" Apr 24 23:37:17.120793 containerd[1515]: time="2026-04-24T23:37:17.120789019Z" level=info msg="StopPodSandbox for \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\" returns successfully" Apr 24 23:37:17.121624 containerd[1515]: time="2026-04-24T23:37:17.121599219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4tlwp,Uid:f7be2f51-9471-4c71-a057-7cb4caf6a49f,Namespace:kube-system,Attempt:1,}" Apr 24 23:37:17.124822 systemd[1]: run-netns-cni\x2dfe1c0da4\x2d0359\x2dad70\x2de576\x2d4f5d06680db4.mount: Deactivated successfully. Apr 24 23:37:17.284476 systemd-networkd[1399]: cali9afae3c7d3c: Link UP Apr 24 23:37:17.287622 systemd-networkd[1399]: cali9afae3c7d3c: Gained carrier Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.204 [INFO][4823] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0 goldmane-5b85766d88- calico-system cb3eda9a-d9f6-480e-8b39-2bd662594f1a 1003 0 2026-04-24 23:36:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f goldmane-5b85766d88-v245s eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9afae3c7d3c [] [] }} ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-" Apr 24 23:37:17.310249 containerd[1515]: 
2026-04-24 23:37:17.205 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.233 [INFO][4848] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" HandleID="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.241 [INFO][4848] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" HandleID="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"goldmane-5b85766d88-v245s", "timestamp":"2026-04-24 23:37:17.233526117 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000362dc0)} Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.242 [INFO][4848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.242 [INFO][4848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.242 [INFO][4848] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.244 [INFO][4848] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.250 [INFO][4848] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.257 [INFO][4848] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.258 [INFO][4848] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.260 [INFO][4848] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.260 [INFO][4848] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.262 [INFO][4848] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.267 [INFO][4848] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.273 [INFO][4848] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.119.198/26] block=192.168.119.192/26 handle="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.273 [INFO][4848] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.198/26] handle="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.273 [INFO][4848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:17.310249 containerd[1515]: 2026-04-24 23:37:17.273 [INFO][4848] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.198/26] IPv6=[] ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" HandleID="k8s-pod-network.97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.310998 containerd[1515]: 2026-04-24 23:37:17.278 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb3eda9a-d9f6-480e-8b39-2bd662594f1a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"goldmane-5b85766d88-v245s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9afae3c7d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:17.310998 containerd[1515]: 2026-04-24 23:37:17.278 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.198/32] ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.310998 containerd[1515]: 2026-04-24 23:37:17.278 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9afae3c7d3c ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.310998 containerd[1515]: 2026-04-24 23:37:17.287 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.310998 containerd[1515]: 2026-04-24 23:37:17.289 [INFO][4823] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb3eda9a-d9f6-480e-8b39-2bd662594f1a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba", Pod:"goldmane-5b85766d88-v245s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9afae3c7d3c", MAC:"56:af:0d:c8:d5:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:17.310998 containerd[1515]: 2026-04-24 23:37:17.304 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba" Namespace="calico-system" Pod="goldmane-5b85766d88-v245s" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:17.345945 containerd[1515]: time="2026-04-24T23:37:17.345271840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:17.345945 containerd[1515]: time="2026-04-24T23:37:17.345526287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:17.345945 containerd[1515]: time="2026-04-24T23:37:17.345650746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:17.345945 containerd[1515]: time="2026-04-24T23:37:17.345846182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:17.395108 systemd[1]: Started cri-containerd-97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba.scope - libcontainer container 97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba. 
Apr 24 23:37:17.402895 systemd-networkd[1399]: cali9c56cdfa323: Link UP Apr 24 23:37:17.403086 systemd-networkd[1399]: cali9c56cdfa323: Gained carrier Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.200 [INFO][4820] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0 coredns-674b8bbfcf- kube-system f7be2f51-9471-4c71-a057-7cb4caf6a49f 1004 0 2026-04-24 23:36:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f coredns-674b8bbfcf-4tlwp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9c56cdfa323 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.200 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.262 [INFO][4844] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" HandleID="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.273 [INFO][4844] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" HandleID="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276230), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"coredns-674b8bbfcf-4tlwp", "timestamp":"2026-04-24 23:37:17.262874316 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000406420)} Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.273 [INFO][4844] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.275 [INFO][4844] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.275 [INFO][4844] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.347 [INFO][4844] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.356 [INFO][4844] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.369 [INFO][4844] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.372 [INFO][4844] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.375 [INFO][4844] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.375 [INFO][4844] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.377 [INFO][4844] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96 Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.383 [INFO][4844] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.391 [INFO][4844] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.119.199/26] block=192.168.119.192/26 handle="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.391 [INFO][4844] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.199/26] handle="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.391 [INFO][4844] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:17.423544 containerd[1515]: 2026-04-24 23:37:17.391 [INFO][4844] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.199/26] IPv6=[] ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" HandleID="k8s-pod-network.5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.424949 containerd[1515]: 2026-04-24 23:37:17.398 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f7be2f51-9471-4c71-a057-7cb4caf6a49f", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"coredns-674b8bbfcf-4tlwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c56cdfa323", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:17.424949 containerd[1515]: 2026-04-24 23:37:17.398 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.199/32] ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.424949 containerd[1515]: 2026-04-24 23:37:17.398 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9c56cdfa323 ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.424949 containerd[1515]: 2026-04-24 23:37:17.401 [INFO][4820] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.424949 containerd[1515]: 2026-04-24 23:37:17.402 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f7be2f51-9471-4c71-a057-7cb4caf6a49f", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96", Pod:"coredns-674b8bbfcf-4tlwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c56cdfa323", 
MAC:"3e:26:2e:f6:1c:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:17.424949 containerd[1515]: 2026-04-24 23:37:17.417 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96" Namespace="kube-system" Pod="coredns-674b8bbfcf-4tlwp" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:17.472959 containerd[1515]: time="2026-04-24T23:37:17.472903147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-v245s,Uid:cb3eda9a-d9f6-480e-8b39-2bd662594f1a,Namespace:calico-system,Attempt:1,} returns sandbox id \"97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba\"" Apr 24 23:37:17.481579 containerd[1515]: time="2026-04-24T23:37:17.480468601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:17.481579 containerd[1515]: time="2026-04-24T23:37:17.481030333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:17.481579 containerd[1515]: time="2026-04-24T23:37:17.481498444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:17.482196 containerd[1515]: time="2026-04-24T23:37:17.481570654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:17.523264 systemd[1]: Started cri-containerd-5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96.scope - libcontainer container 5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96. Apr 24 23:37:17.578365 containerd[1515]: time="2026-04-24T23:37:17.578324580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4tlwp,Uid:f7be2f51-9471-4c71-a057-7cb4caf6a49f,Namespace:kube-system,Attempt:1,} returns sandbox id \"5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96\"" Apr 24 23:37:17.591824 containerd[1515]: time="2026-04-24T23:37:17.591766424Z" level=info msg="CreateContainer within sandbox \"5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:37:17.606863 containerd[1515]: time="2026-04-24T23:37:17.606600271Z" level=info msg="CreateContainer within sandbox \"5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba6c0962bab987e91f253b180c87a3dea97ee40faed7a66bc16683e420cb7fe2\"" Apr 24 23:37:17.607240 containerd[1515]: time="2026-04-24T23:37:17.607215345Z" level=info msg="StartContainer for \"ba6c0962bab987e91f253b180c87a3dea97ee40faed7a66bc16683e420cb7fe2\"" Apr 24 23:37:17.637945 systemd[1]: Started cri-containerd-ba6c0962bab987e91f253b180c87a3dea97ee40faed7a66bc16683e420cb7fe2.scope - libcontainer container ba6c0962bab987e91f253b180c87a3dea97ee40faed7a66bc16683e420cb7fe2. 
Apr 24 23:37:17.671945 containerd[1515]: time="2026-04-24T23:37:17.671775412Z" level=info msg="StartContainer for \"ba6c0962bab987e91f253b180c87a3dea97ee40faed7a66bc16683e420cb7fe2\" returns successfully" Apr 24 23:37:17.940511 kubelet[2559]: I0424 23:37:17.940462 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:17.958833 containerd[1515]: time="2026-04-24T23:37:17.958081216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:17.959461 containerd[1515]: time="2026-04-24T23:37:17.959430254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 24 23:37:17.960512 containerd[1515]: time="2026-04-24T23:37:17.960496914Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:17.962573 containerd[1515]: time="2026-04-24T23:37:17.962557472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:17.963403 containerd[1515]: time="2026-04-24T23:37:17.963360931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.650668344s" Apr 24 23:37:17.963442 containerd[1515]: time="2026-04-24T23:37:17.963412439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:37:17.966655 containerd[1515]: time="2026-04-24T23:37:17.966207071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:37:17.970325 containerd[1515]: time="2026-04-24T23:37:17.970301757Z" level=info msg="CreateContainer within sandbox \"c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:37:17.987565 containerd[1515]: time="2026-04-24T23:37:17.987534365Z" level=info msg="StopPodSandbox for \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\"" Apr 24 23:37:17.990183 containerd[1515]: time="2026-04-24T23:37:17.990093940Z" level=info msg="CreateContainer within sandbox \"c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2f3b81664cb43155906bc62c800ab6aabdd89d1c2afd428ef837a5c7800c57f1\"" Apr 24 23:37:17.991363 containerd[1515]: time="2026-04-24T23:37:17.991326672Z" level=info msg="StartContainer for \"2f3b81664cb43155906bc62c800ab6aabdd89d1c2afd428ef837a5c7800c57f1\"" Apr 24 23:37:18.031979 systemd-networkd[1399]: calif3946601103: Gained IPv6LL Apr 24 23:37:18.061521 systemd[1]: Started cri-containerd-2f3b81664cb43155906bc62c800ab6aabdd89d1c2afd428ef837a5c7800c57f1.scope - libcontainer container 2f3b81664cb43155906bc62c800ab6aabdd89d1c2afd428ef837a5c7800c57f1. Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.037 [INFO][5052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.037 [INFO][5052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" iface="eth0" netns="/var/run/netns/cni-14a0f406-5644-e564-a577-23542626cb0a" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.038 [INFO][5052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" iface="eth0" netns="/var/run/netns/cni-14a0f406-5644-e564-a577-23542626cb0a" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.043 [INFO][5052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" iface="eth0" netns="/var/run/netns/cni-14a0f406-5644-e564-a577-23542626cb0a" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.043 [INFO][5052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.043 [INFO][5052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.083 [INFO][5078] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.084 [INFO][5078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.084 [INFO][5078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.094 [WARNING][5078] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.094 [INFO][5078] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.098 [INFO][5078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:18.107704 containerd[1515]: 2026-04-24 23:37:18.103 [INFO][5052] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:18.108401 containerd[1515]: time="2026-04-24T23:37:18.108339211Z" level=info msg="TearDown network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\" successfully" Apr 24 23:37:18.108401 containerd[1515]: time="2026-04-24T23:37:18.108362857Z" level=info msg="StopPodSandbox for \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\" returns successfully" Apr 24 23:37:18.111487 containerd[1515]: time="2026-04-24T23:37:18.111193280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfr8z,Uid:10ad3905-de1e-4e02-b1ab-22144ab5e695,Namespace:kube-system,Attempt:1,}" Apr 24 23:37:18.148485 containerd[1515]: time="2026-04-24T23:37:18.148456373Z" level=info msg="StartContainer for \"2f3b81664cb43155906bc62c800ab6aabdd89d1c2afd428ef837a5c7800c57f1\" returns successfully" Apr 24 23:37:18.224691 systemd[1]: run-netns-cni\x2d14a0f406\x2d5644\x2de564\x2da577\x2d23542626cb0a.mount: Deactivated successfully. 
Apr 24 23:37:18.235722 systemd-networkd[1399]: calie0155b8aeee: Link UP Apr 24 23:37:18.236611 systemd-networkd[1399]: calie0155b8aeee: Gained carrier Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.169 [INFO][5114] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0 coredns-674b8bbfcf- kube-system 10ad3905-de1e-4e02-b1ab-22144ab5e695 1018 0 2026-04-24 23:36:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-61b787660f coredns-674b8bbfcf-kfr8z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0155b8aeee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.169 [INFO][5114] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.196 [INFO][5137] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" HandleID="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.200 [INFO][5137] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" HandleID="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f94b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-61b787660f", "pod":"coredns-674b8bbfcf-kfr8z", "timestamp":"2026-04-24 23:37:18.196302107 +0000 UTC"}, Hostname:"ci-4081-3-6-n-61b787660f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000441080)} Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.200 [INFO][5137] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.201 [INFO][5137] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.201 [INFO][5137] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-61b787660f' Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.202 [INFO][5137] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.206 [INFO][5137] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.212 [INFO][5137] ipam/ipam.go 526: Trying affinity for 192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.214 [INFO][5137] ipam/ipam.go 160: Attempting to load block cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.216 [INFO][5137] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.119.192/26 host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.216 [INFO][5137] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.119.192/26 handle="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.217 [INFO][5137] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9 Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.222 [INFO][5137] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.119.192/26 handle="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.228 [INFO][5137] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.119.200/26] block=192.168.119.192/26 handle="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.228 [INFO][5137] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.119.200/26] handle="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" host="ci-4081-3-6-n-61b787660f" Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.228 [INFO][5137] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:18.265225 containerd[1515]: 2026-04-24 23:37:18.229 [INFO][5137] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.119.200/26] IPv6=[] ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" HandleID="k8s-pod-network.74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.265669 containerd[1515]: 2026-04-24 23:37:18.232 [INFO][5114] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"10ad3905-de1e-4e02-b1ab-22144ab5e695", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"", Pod:"coredns-674b8bbfcf-kfr8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0155b8aeee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:18.265669 containerd[1515]: 2026-04-24 23:37:18.232 [INFO][5114] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.119.200/32] ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.265669 containerd[1515]: 2026-04-24 23:37:18.232 [INFO][5114] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0155b8aeee ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.265669 containerd[1515]: 2026-04-24 23:37:18.239 [INFO][5114] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.265669 containerd[1515]: 2026-04-24 23:37:18.242 [INFO][5114] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"10ad3905-de1e-4e02-b1ab-22144ab5e695", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9", Pod:"coredns-674b8bbfcf-kfr8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0155b8aeee", 
MAC:"72:2c:47:e3:2e:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:18.265669 containerd[1515]: 2026-04-24 23:37:18.256 [INFO][5114] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-kfr8z" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:18.298307 kubelet[2559]: I0424 23:37:18.297615 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-956fb69fd-pkvxp" podStartSLOduration=27.643060035 podStartE2EDuration="31.297602414s" podCreationTimestamp="2026-04-24 23:36:47 +0000 UTC" firstStartedPulling="2026-04-24 23:37:14.311483217 +0000 UTC m=+41.417385296" lastFinishedPulling="2026-04-24 23:37:17.966025586 +0000 UTC m=+45.071927675" observedRunningTime="2026-04-24 23:37:18.293191523 +0000 UTC m=+45.399093602" watchObservedRunningTime="2026-04-24 23:37:18.297602414 +0000 UTC m=+45.403504503" Apr 24 23:37:18.298307 kubelet[2559]: I0424 23:37:18.297758 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4tlwp" podStartSLOduration=40.297754925 podStartE2EDuration="40.297754925s" podCreationTimestamp="2026-04-24 23:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:18.269835336 
+0000 UTC m=+45.375737425" watchObservedRunningTime="2026-04-24 23:37:18.297754925 +0000 UTC m=+45.403657004" Apr 24 23:37:18.300344 containerd[1515]: time="2026-04-24T23:37:18.300039801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:37:18.300344 containerd[1515]: time="2026-04-24T23:37:18.300083337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:37:18.300344 containerd[1515]: time="2026-04-24T23:37:18.300093323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:18.300532 containerd[1515]: time="2026-04-24T23:37:18.300462462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:37:18.334994 systemd[1]: Started cri-containerd-74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9.scope - libcontainer container 74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9. 
Apr 24 23:37:18.372484 containerd[1515]: time="2026-04-24T23:37:18.372449826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kfr8z,Uid:10ad3905-de1e-4e02-b1ab-22144ab5e695,Namespace:kube-system,Attempt:1,} returns sandbox id \"74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9\"" Apr 24 23:37:18.379733 containerd[1515]: time="2026-04-24T23:37:18.379666703Z" level=info msg="CreateContainer within sandbox \"74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:37:18.396093 containerd[1515]: time="2026-04-24T23:37:18.395761515Z" level=info msg="CreateContainer within sandbox \"74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07e161f3948f43774fbc8952f61099f661f3a167c6edd1b3bf43a57caca61921\"" Apr 24 23:37:18.397864 containerd[1515]: time="2026-04-24T23:37:18.397382314Z" level=info msg="StartContainer for \"07e161f3948f43774fbc8952f61099f661f3a167c6edd1b3bf43a57caca61921\"" Apr 24 23:37:18.431957 systemd[1]: Started cri-containerd-07e161f3948f43774fbc8952f61099f661f3a167c6edd1b3bf43a57caca61921.scope - libcontainer container 07e161f3948f43774fbc8952f61099f661f3a167c6edd1b3bf43a57caca61921. 
Apr 24 23:37:18.463084 containerd[1515]: time="2026-04-24T23:37:18.463049931Z" level=info msg="StartContainer for \"07e161f3948f43774fbc8952f61099f661f3a167c6edd1b3bf43a57caca61921\" returns successfully" Apr 24 23:37:18.463994 containerd[1515]: time="2026-04-24T23:37:18.463343927Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:18.464792 containerd[1515]: time="2026-04-24T23:37:18.464755458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:37:18.466909 containerd[1515]: time="2026-04-24T23:37:18.466873762Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 500.641993ms" Apr 24 23:37:18.466909 containerd[1515]: time="2026-04-24T23:37:18.466900933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 24 23:37:18.468929 containerd[1515]: time="2026-04-24T23:37:18.467843042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:37:18.471402 containerd[1515]: time="2026-04-24T23:37:18.471377473Z" level=info msg="CreateContainer within sandbox \"0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:37:18.491054 containerd[1515]: time="2026-04-24T23:37:18.490239775Z" level=info msg="CreateContainer within sandbox \"0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"b5241f2d9a0e36735f295b6152803b7359854d4fdf3f210fa30a88f1a1198c8a\"" Apr 24 23:37:18.491733 containerd[1515]: time="2026-04-24T23:37:18.491708042Z" level=info msg="StartContainer for \"b5241f2d9a0e36735f295b6152803b7359854d4fdf3f210fa30a88f1a1198c8a\"" Apr 24 23:37:18.519959 systemd[1]: Started cri-containerd-b5241f2d9a0e36735f295b6152803b7359854d4fdf3f210fa30a88f1a1198c8a.scope - libcontainer container b5241f2d9a0e36735f295b6152803b7359854d4fdf3f210fa30a88f1a1198c8a. Apr 24 23:37:18.544494 systemd-networkd[1399]: cali9c56cdfa323: Gained IPv6LL Apr 24 23:37:18.567636 containerd[1515]: time="2026-04-24T23:37:18.567579847Z" level=info msg="StartContainer for \"b5241f2d9a0e36735f295b6152803b7359854d4fdf3f210fa30a88f1a1198c8a\" returns successfully" Apr 24 23:37:18.672424 systemd-networkd[1399]: cali9afae3c7d3c: Gained IPv6LL Apr 24 23:37:19.284989 kubelet[2559]: I0424 23:37:19.284916 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kfr8z" podStartSLOduration=41.284902521 podStartE2EDuration="41.284902521s" podCreationTimestamp="2026-04-24 23:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:37:19.270030227 +0000 UTC m=+46.375932316" watchObservedRunningTime="2026-04-24 23:37:19.284902521 +0000 UTC m=+46.390804610" Apr 24 23:37:19.696650 systemd-networkd[1399]: calie0155b8aeee: Gained IPv6LL Apr 24 23:37:20.262827 kubelet[2559]: I0424 23:37:20.262434 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:20.479428 kubelet[2559]: I0424 23:37:20.478762 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-956fb69fd-vsnpb" podStartSLOduration=30.267823354 podStartE2EDuration="33.478735992s" podCreationTimestamp="2026-04-24 23:36:47 +0000 UTC" firstStartedPulling="2026-04-24 23:37:15.256445268 
+0000 UTC m=+42.362347347" lastFinishedPulling="2026-04-24 23:37:18.467357906 +0000 UTC m=+45.573259985" observedRunningTime="2026-04-24 23:37:19.302240252 +0000 UTC m=+46.408142331" watchObservedRunningTime="2026-04-24 23:37:20.478735992 +0000 UTC m=+47.584638071" Apr 24 23:37:21.723640 containerd[1515]: time="2026-04-24T23:37:21.723580177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:21.724931 containerd[1515]: time="2026-04-24T23:37:21.724884722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 24 23:37:21.726137 containerd[1515]: time="2026-04-24T23:37:21.726045349Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:21.727873 containerd[1515]: time="2026-04-24T23:37:21.727847988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:21.728865 containerd[1515]: time="2026-04-24T23:37:21.728841492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.260979272s" Apr 24 23:37:21.728915 containerd[1515]: time="2026-04-24T23:37:21.728868534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 24 
23:37:21.730611 containerd[1515]: time="2026-04-24T23:37:21.730414594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:37:21.746087 containerd[1515]: time="2026-04-24T23:37:21.746050475Z" level=info msg="CreateContainer within sandbox \"c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:37:21.770131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628300324.mount: Deactivated successfully. Apr 24 23:37:21.772355 containerd[1515]: time="2026-04-24T23:37:21.772318114Z" level=info msg="CreateContainer within sandbox \"c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"da509f4b4343818d366056e4ccde31858332793b597f08c494434d6284d777a2\"" Apr 24 23:37:21.773362 containerd[1515]: time="2026-04-24T23:37:21.773335844Z" level=info msg="StartContainer for \"da509f4b4343818d366056e4ccde31858332793b597f08c494434d6284d777a2\"" Apr 24 23:37:21.814441 systemd[1]: Started cri-containerd-da509f4b4343818d366056e4ccde31858332793b597f08c494434d6284d777a2.scope - libcontainer container da509f4b4343818d366056e4ccde31858332793b597f08c494434d6284d777a2. 
Apr 24 23:37:21.871680 containerd[1515]: time="2026-04-24T23:37:21.871617999Z" level=info msg="StartContainer for \"da509f4b4343818d366056e4ccde31858332793b597f08c494434d6284d777a2\" returns successfully" Apr 24 23:37:22.293361 kubelet[2559]: I0424 23:37:22.293160 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7978df8fd-fbqhx" podStartSLOduration=28.835081962 podStartE2EDuration="34.293138624s" podCreationTimestamp="2026-04-24 23:36:48 +0000 UTC" firstStartedPulling="2026-04-24 23:37:16.271943845 +0000 UTC m=+43.377845934" lastFinishedPulling="2026-04-24 23:37:21.730000517 +0000 UTC m=+48.835902596" observedRunningTime="2026-04-24 23:37:22.291745866 +0000 UTC m=+49.397647995" watchObservedRunningTime="2026-04-24 23:37:22.293138624 +0000 UTC m=+49.399040743" Apr 24 23:37:24.704349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount902466454.mount: Deactivated successfully. Apr 24 23:37:25.122997 containerd[1515]: time="2026-04-24T23:37:25.122872516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:25.124943 containerd[1515]: time="2026-04-24T23:37:25.124171179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 24 23:37:25.125126 containerd[1515]: time="2026-04-24T23:37:25.125102426Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:25.126929 containerd[1515]: time="2026-04-24T23:37:25.126906131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:25.128321 containerd[1515]: time="2026-04-24T23:37:25.128292596Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.397849118s" Apr 24 23:37:25.128361 containerd[1515]: time="2026-04-24T23:37:25.128338445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 24 23:37:25.133740 containerd[1515]: time="2026-04-24T23:37:25.133691503Z" level=info msg="CreateContainer within sandbox \"97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:37:25.146233 containerd[1515]: time="2026-04-24T23:37:25.146190357Z" level=info msg="CreateContainer within sandbox \"97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389\"" Apr 24 23:37:25.147476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207384330.mount: Deactivated successfully. Apr 24 23:37:25.148261 containerd[1515]: time="2026-04-24T23:37:25.148230620Z" level=info msg="StartContainer for \"35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389\"" Apr 24 23:37:25.181913 systemd[1]: Started cri-containerd-35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389.scope - libcontainer container 35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389. 
Apr 24 23:37:25.224163 containerd[1515]: time="2026-04-24T23:37:25.224110514Z" level=info msg="StartContainer for \"35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389\" returns successfully" Apr 24 23:37:25.295072 kubelet[2559]: I0424 23:37:25.295017 2559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-v245s" podStartSLOduration=29.652603913 podStartE2EDuration="37.295003855s" podCreationTimestamp="2026-04-24 23:36:48 +0000 UTC" firstStartedPulling="2026-04-24 23:37:17.487324948 +0000 UTC m=+44.593227027" lastFinishedPulling="2026-04-24 23:37:25.12972488 +0000 UTC m=+52.235626969" observedRunningTime="2026-04-24 23:37:25.294512253 +0000 UTC m=+52.400414342" watchObservedRunningTime="2026-04-24 23:37:25.295003855 +0000 UTC m=+52.400905944" Apr 24 23:37:26.288325 kubelet[2559]: I0424 23:37:26.287844 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:28.815421 kubelet[2559]: I0424 23:37:28.815353 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:28.853306 systemd[1]: run-containerd-runc-k8s.io-35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389-runc.9D1Tkc.mount: Deactivated successfully. Apr 24 23:37:32.996188 containerd[1515]: time="2026-04-24T23:37:32.996065602Z" level=info msg="StopPodSandbox for \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\"" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.056 [WARNING][5525] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"13f36578-f50e-4376-ab00-85fdce7a7d13", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc", Pod:"calico-apiserver-956fb69fd-vsnpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia20395b46ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.057 [INFO][5525] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.057 [INFO][5525] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" iface="eth0" netns="" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.057 [INFO][5525] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.057 [INFO][5525] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.076 [INFO][5532] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.076 [INFO][5532] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.076 [INFO][5532] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.081 [WARNING][5532] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.081 [INFO][5532] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.083 [INFO][5532] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.086846 containerd[1515]: 2026-04-24 23:37:33.084 [INFO][5525] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.086846 containerd[1515]: time="2026-04-24T23:37:33.086687341Z" level=info msg="TearDown network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\" successfully" Apr 24 23:37:33.086846 containerd[1515]: time="2026-04-24T23:37:33.086711367Z" level=info msg="StopPodSandbox for \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\" returns successfully" Apr 24 23:37:33.088205 containerd[1515]: time="2026-04-24T23:37:33.087344400Z" level=info msg="RemovePodSandbox for \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\"" Apr 24 23:37:33.088205 containerd[1515]: time="2026-04-24T23:37:33.087364270Z" level=info msg="Forcibly stopping sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\"" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.117 [WARNING][5546] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"13f36578-f50e-4376-ab00-85fdce7a7d13", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"0b73d88e411ad17359f7f9d4b1aa569f78fb95c6b78d97d0fd5235345ab27dfc", Pod:"calico-apiserver-956fb69fd-vsnpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia20395b46ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.117 [INFO][5546] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.117 [INFO][5546] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" iface="eth0" netns="" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.117 [INFO][5546] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.117 [INFO][5546] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.139 [INFO][5554] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.139 [INFO][5554] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.139 [INFO][5554] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.143 [WARNING][5554] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.143 [INFO][5554] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" HandleID="k8s-pod-network.9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--vsnpb-eth0" Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.144 [INFO][5554] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.148984 containerd[1515]: 2026-04-24 23:37:33.147 [INFO][5546] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788" Apr 24 23:37:33.149345 containerd[1515]: time="2026-04-24T23:37:33.149019464Z" level=info msg="TearDown network for sandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\" successfully" Apr 24 23:37:33.153977 containerd[1515]: time="2026-04-24T23:37:33.153919116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:33.154124 containerd[1515]: time="2026-04-24T23:37:33.153993478Z" level=info msg="RemovePodSandbox \"9980c5df72ba336350261a535ce7e8ae51e0749431ce93092437ae4d69fa2788\" returns successfully" Apr 24 23:37:33.154537 containerd[1515]: time="2026-04-24T23:37:33.154510497Z" level=info msg="StopPodSandbox for \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\"" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.186 [WARNING][5569] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"09d823c4-6db7-41e1-b24a-cd89a1522a77", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176", Pod:"calico-apiserver-956fb69fd-pkvxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2f7ac6a0e52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.186 [INFO][5569] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.186 [INFO][5569] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" iface="eth0" netns="" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.186 [INFO][5569] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.186 [INFO][5569] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.203 [INFO][5576] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.203 [INFO][5576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.203 [INFO][5576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.208 [WARNING][5576] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.208 [INFO][5576] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.210 [INFO][5576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.214717 containerd[1515]: 2026-04-24 23:37:33.212 [INFO][5569] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.215618 containerd[1515]: time="2026-04-24T23:37:33.214754584Z" level=info msg="TearDown network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\" successfully" Apr 24 23:37:33.215618 containerd[1515]: time="2026-04-24T23:37:33.214775887Z" level=info msg="StopPodSandbox for \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\" returns successfully" Apr 24 23:37:33.215618 containerd[1515]: time="2026-04-24T23:37:33.215310652Z" level=info msg="RemovePodSandbox for \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\"" Apr 24 23:37:33.215618 containerd[1515]: time="2026-04-24T23:37:33.215341198Z" level=info msg="Forcibly stopping sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\"" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.245 [WARNING][5591] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0", GenerateName:"calico-apiserver-956fb69fd-", Namespace:"calico-system", SelfLink:"", UID:"09d823c4-6db7-41e1-b24a-cd89a1522a77", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"956fb69fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"c129523a5fb5e309396f292abf95ef6a19a899e35002355cd9a71e6b93240176", Pod:"calico-apiserver-956fb69fd-pkvxp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.119.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2f7ac6a0e52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.245 [INFO][5591] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.245 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" iface="eth0" netns="" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.245 [INFO][5591] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.245 [INFO][5591] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.261 [INFO][5598] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.261 [INFO][5598] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.261 [INFO][5598] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.266 [WARNING][5598] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.266 [INFO][5598] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" HandleID="k8s-pod-network.152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--apiserver--956fb69fd--pkvxp-eth0" Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.267 [INFO][5598] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.271570 containerd[1515]: 2026-04-24 23:37:33.269 [INFO][5591] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6" Apr 24 23:37:33.271570 containerd[1515]: time="2026-04-24T23:37:33.271498578Z" level=info msg="TearDown network for sandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\" successfully" Apr 24 23:37:33.275110 containerd[1515]: time="2026-04-24T23:37:33.275031090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:33.275110 containerd[1515]: time="2026-04-24T23:37:33.275074755Z" level=info msg="RemovePodSandbox \"152849614f5e2b385379b0990a0e1e021ccd99891c2ab70d85f1e9e18bff61a6\" returns successfully" Apr 24 23:37:33.275500 containerd[1515]: time="2026-04-24T23:37:33.275468429Z" level=info msg="StopPodSandbox for \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\"" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.319 [WARNING][5612] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"10ad3905-de1e-4e02-b1ab-22144ab5e695", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9", Pod:"coredns-674b8bbfcf-kfr8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0155b8aeee", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.319 [INFO][5612] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.319 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" iface="eth0" netns="" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.319 [INFO][5612] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.319 [INFO][5612] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.338 [INFO][5619] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.338 [INFO][5619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.338 [INFO][5619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.343 [WARNING][5619] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.343 [INFO][5619] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.344 [INFO][5619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.348457 containerd[1515]: 2026-04-24 23:37:33.346 [INFO][5612] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.348949 containerd[1515]: time="2026-04-24T23:37:33.348497118Z" level=info msg="TearDown network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\" successfully" Apr 24 23:37:33.348949 containerd[1515]: time="2026-04-24T23:37:33.348830620Z" level=info msg="StopPodSandbox for \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\" returns successfully" Apr 24 23:37:33.350338 containerd[1515]: time="2026-04-24T23:37:33.350213885Z" level=info msg="RemovePodSandbox for \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\"" Apr 24 23:37:33.350338 containerd[1515]: time="2026-04-24T23:37:33.350286383Z" level=info msg="Forcibly stopping sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\"" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.383 [WARNING][5634] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"10ad3905-de1e-4e02-b1ab-22144ab5e695", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"74f8528c87559a88c0aa12e87a4e988fd08dbe1c629d8bfc5b5a2cf2a738cdb9", Pod:"coredns-674b8bbfcf-kfr8z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0155b8aeee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.411772 containerd[1515]: 
2026-04-24 23:37:33.383 [INFO][5634] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.383 [INFO][5634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" iface="eth0" netns="" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.383 [INFO][5634] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.383 [INFO][5634] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.401 [INFO][5642] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.401 [INFO][5642] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.401 [INFO][5642] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.406 [WARNING][5642] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.406 [INFO][5642] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" HandleID="k8s-pod-network.8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--kfr8z-eth0" Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.407 [INFO][5642] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.411772 containerd[1515]: 2026-04-24 23:37:33.409 [INFO][5634] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998" Apr 24 23:37:33.412148 containerd[1515]: time="2026-04-24T23:37:33.411817250Z" level=info msg="TearDown network for sandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\" successfully" Apr 24 23:37:33.415464 containerd[1515]: time="2026-04-24T23:37:33.415435951Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:33.415544 containerd[1515]: time="2026-04-24T23:37:33.415486668Z" level=info msg="RemovePodSandbox \"8d78ac41699cdc544a3f1c3013ab57492eb81d0ea5500a0c2a230a093bd51998\" returns successfully" Apr 24 23:37:33.415982 containerd[1515]: time="2026-04-24T23:37:33.415942134Z" level=info msg="StopPodSandbox for \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\"" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.446 [WARNING][5656] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb3eda9a-d9f6-480e-8b39-2bd662594f1a", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba", Pod:"goldmane-5b85766d88-v245s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali9afae3c7d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.446 [INFO][5656] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.446 [INFO][5656] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" iface="eth0" netns="" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.446 [INFO][5656] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.446 [INFO][5656] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.463 [INFO][5663] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.463 [INFO][5663] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.463 [INFO][5663] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.468 [WARNING][5663] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.468 [INFO][5663] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.469 [INFO][5663] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.473292 containerd[1515]: 2026-04-24 23:37:33.471 [INFO][5656] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.473796 containerd[1515]: time="2026-04-24T23:37:33.473329007Z" level=info msg="TearDown network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\" successfully" Apr 24 23:37:33.473796 containerd[1515]: time="2026-04-24T23:37:33.473353263Z" level=info msg="StopPodSandbox for \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\" returns successfully" Apr 24 23:37:33.474005 containerd[1515]: time="2026-04-24T23:37:33.473916682Z" level=info msg="RemovePodSandbox for \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\"" Apr 24 23:37:33.474056 containerd[1515]: time="2026-04-24T23:37:33.474006557Z" level=info msg="Forcibly stopping sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\"" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.504 [WARNING][5677] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cb3eda9a-d9f6-480e-8b39-2bd662594f1a", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"97d19674252fd6d56ad28a448d626e1a67f5a8aa06d5a3e8216a2ec8f5140cba", Pod:"goldmane-5b85766d88-v245s", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.119.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9afae3c7d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.504 [INFO][5677] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.504 [INFO][5677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" iface="eth0" netns="" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.504 [INFO][5677] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.504 [INFO][5677] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.523 [INFO][5685] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.523 [INFO][5685] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.523 [INFO][5685] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.529 [WARNING][5685] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.530 [INFO][5685] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" HandleID="k8s-pod-network.fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Workload="ci--4081--3--6--n--61b787660f-k8s-goldmane--5b85766d88--v245s-eth0" Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.531 [INFO][5685] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.536746 containerd[1515]: 2026-04-24 23:37:33.533 [INFO][5677] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3" Apr 24 23:37:33.536746 containerd[1515]: time="2026-04-24T23:37:33.536006801Z" level=info msg="TearDown network for sandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\" successfully" Apr 24 23:37:33.539642 containerd[1515]: time="2026-04-24T23:37:33.539611862Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:33.539694 containerd[1515]: time="2026-04-24T23:37:33.539665983Z" level=info msg="RemovePodSandbox \"fb0d9de352560758a71138d2150d17887e9c5217f3c6f56b403486acd9ac74e3\" returns successfully" Apr 24 23:37:33.540190 containerd[1515]: time="2026-04-24T23:37:33.540146617Z" level=info msg="StopPodSandbox for \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\"" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.566 [WARNING][5699] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f7be2f51-9471-4c71-a057-7cb4caf6a49f", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96", Pod:"coredns-674b8bbfcf-4tlwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c56cdfa323", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.566 [INFO][5699] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.566 [INFO][5699] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" iface="eth0" netns="" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.566 [INFO][5699] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.566 [INFO][5699] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.582 [INFO][5707] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.582 [INFO][5707] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.582 [INFO][5707] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.587 [WARNING][5707] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.587 [INFO][5707] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.588 [INFO][5707] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.592751 containerd[1515]: 2026-04-24 23:37:33.590 [INFO][5699] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.592751 containerd[1515]: time="2026-04-24T23:37:33.592608249Z" level=info msg="TearDown network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\" successfully" Apr 24 23:37:33.592751 containerd[1515]: time="2026-04-24T23:37:33.592631173Z" level=info msg="StopPodSandbox for \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\" returns successfully" Apr 24 23:37:33.593143 containerd[1515]: time="2026-04-24T23:37:33.593080741Z" level=info msg="RemovePodSandbox for \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\"" Apr 24 23:37:33.593143 containerd[1515]: time="2026-04-24T23:37:33.593109143Z" level=info msg="Forcibly stopping sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\"" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.622 [WARNING][5721] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f7be2f51-9471-4c71-a057-7cb4caf6a49f", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"5693d4b599f471aa3d41d4c324b437517fdbd3fc8204fde16978fa15af917d96", Pod:"coredns-674b8bbfcf-4tlwp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.119.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9c56cdfa323", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.652730 containerd[1515]: 
2026-04-24 23:37:33.622 [INFO][5721] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.622 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" iface="eth0" netns="" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.622 [INFO][5721] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.622 [INFO][5721] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.642 [INFO][5728] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.643 [INFO][5728] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.643 [INFO][5728] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.647 [WARNING][5728] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.647 [INFO][5728] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" HandleID="k8s-pod-network.37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Workload="ci--4081--3--6--n--61b787660f-k8s-coredns--674b8bbfcf--4tlwp-eth0" Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.648 [INFO][5728] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.652730 containerd[1515]: 2026-04-24 23:37:33.650 [INFO][5721] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f" Apr 24 23:37:33.653095 containerd[1515]: time="2026-04-24T23:37:33.652769091Z" level=info msg="TearDown network for sandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\" successfully" Apr 24 23:37:33.656272 containerd[1515]: time="2026-04-24T23:37:33.656244828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:33.656329 containerd[1515]: time="2026-04-24T23:37:33.656298809Z" level=info msg="RemovePodSandbox \"37b7c15803f7c01af420ec9b93b027044630ab83ab33ac92df1b2c821b02e67f\" returns successfully" Apr 24 23:37:33.656788 containerd[1515]: time="2026-04-24T23:37:33.656765572Z" level=info msg="StopPodSandbox for \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\"" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.684 [WARNING][5742] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0", GenerateName:"calico-kube-controllers-7978df8fd-", Namespace:"calico-system", SelfLink:"", UID:"ab76dadb-e21e-4b46-b8de-e6bfe6616944", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7978df8fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce", Pod:"calico-kube-controllers-7978df8fd-fbqhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3946601103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.685 [INFO][5742] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.685 [INFO][5742] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" iface="eth0" netns="" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.685 [INFO][5742] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.685 [INFO][5742] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.702 [INFO][5749] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.702 [INFO][5749] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.702 [INFO][5749] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.708 [WARNING][5749] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.708 [INFO][5749] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.710 [INFO][5749] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.714347 containerd[1515]: 2026-04-24 23:37:33.712 [INFO][5742] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.714940 containerd[1515]: time="2026-04-24T23:37:33.714378535Z" level=info msg="TearDown network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\" successfully" Apr 24 23:37:33.714940 containerd[1515]: time="2026-04-24T23:37:33.714399236Z" level=info msg="StopPodSandbox for \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\" returns successfully" Apr 24 23:37:33.715041 containerd[1515]: time="2026-04-24T23:37:33.715016346Z" level=info msg="RemovePodSandbox for \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\"" Apr 24 23:37:33.715117 containerd[1515]: time="2026-04-24T23:37:33.715039581Z" level=info msg="Forcibly stopping sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\"" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.741 [WARNING][5763] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0", GenerateName:"calico-kube-controllers-7978df8fd-", Namespace:"calico-system", SelfLink:"", UID:"ab76dadb-e21e-4b46-b8de-e6bfe6616944", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7978df8fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-61b787660f", ContainerID:"c613b6887e9bc89997956e863d17dcd58cbd2a809a4dad12071f048547a807ce", Pod:"calico-kube-controllers-7978df8fd-fbqhx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.119.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif3946601103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.741 [INFO][5763] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.741 [INFO][5763] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" iface="eth0" netns="" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.741 [INFO][5763] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.741 [INFO][5763] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.757 [INFO][5771] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.757 [INFO][5771] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.757 [INFO][5771] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.762 [WARNING][5771] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.762 [INFO][5771] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" HandleID="k8s-pod-network.0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Workload="ci--4081--3--6--n--61b787660f-k8s-calico--kube--controllers--7978df8fd--fbqhx-eth0" Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.763 [INFO][5771] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.767843 containerd[1515]: 2026-04-24 23:37:33.765 [INFO][5763] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d" Apr 24 23:37:33.767843 containerd[1515]: time="2026-04-24T23:37:33.767698600Z" level=info msg="TearDown network for sandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\" successfully" Apr 24 23:37:33.771170 containerd[1515]: time="2026-04-24T23:37:33.771131873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:33.771214 containerd[1515]: time="2026-04-24T23:37:33.771181678Z" level=info msg="RemovePodSandbox \"0fb5a76ab141a28e7da0fde18e5904060e301648335eb3b643dd789f0b97106d\" returns successfully" Apr 24 23:37:33.771695 containerd[1515]: time="2026-04-24T23:37:33.771644856Z" level=info msg="StopPodSandbox for \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\"" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.798 [WARNING][5786] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.798 [INFO][5786] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.798 [INFO][5786] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" iface="eth0" netns="" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.798 [INFO][5786] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.798 [INFO][5786] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.815 [INFO][5793] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.815 [INFO][5793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.815 [INFO][5793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.820 [WARNING][5793] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.820 [INFO][5793] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.821 [INFO][5793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.826626 containerd[1515]: 2026-04-24 23:37:33.823 [INFO][5786] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.826626 containerd[1515]: time="2026-04-24T23:37:33.825201488Z" level=info msg="TearDown network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\" successfully" Apr 24 23:37:33.826626 containerd[1515]: time="2026-04-24T23:37:33.825232774Z" level=info msg="StopPodSandbox for \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\" returns successfully" Apr 24 23:37:33.826929 containerd[1515]: time="2026-04-24T23:37:33.826656299Z" level=info msg="RemovePodSandbox for \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\"" Apr 24 23:37:33.826929 containerd[1515]: time="2026-04-24T23:37:33.826683670Z" level=info msg="Forcibly stopping sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\"" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.855 [WARNING][5807] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" WorkloadEndpoint="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.855 [INFO][5807] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.855 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" iface="eth0" netns="" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.855 [INFO][5807] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.855 [INFO][5807] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.873 [INFO][5814] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.873 [INFO][5814] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.873 [INFO][5814] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.879 [WARNING][5814] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.879 [INFO][5814] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" HandleID="k8s-pod-network.d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Workload="ci--4081--3--6--n--61b787660f-k8s-whisker--645dbcb77--rtzrr-eth0" Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.880 [INFO][5814] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:33.887289 containerd[1515]: 2026-04-24 23:37:33.884 [INFO][5807] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea" Apr 24 23:37:33.887595 containerd[1515]: time="2026-04-24T23:37:33.887324976Z" level=info msg="TearDown network for sandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\" successfully" Apr 24 23:37:33.890777 containerd[1515]: time="2026-04-24T23:37:33.890729205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 24 23:37:33.890840 containerd[1515]: time="2026-04-24T23:37:33.890783837Z" level=info msg="RemovePodSandbox \"d177f9e3661adf98e5ca308e40a3f593d9b4caf1a29a0abde198f4df5ed64cea\" returns successfully" Apr 24 23:37:51.926657 systemd[1]: run-containerd-runc-k8s.io-35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389-runc.HlpIpq.mount: Deactivated successfully. 
Apr 24 23:38:00.324957 kubelet[2559]: I0424 23:38:00.324478 2559 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:38:07.962429 systemd[1]: Started sshd@7-65.108.57.84:22-4.175.71.9:52060.service - OpenSSH per-connection server daemon (4.175.71.9:52060). Apr 24 23:38:08.187550 sshd[5944]: Accepted publickey for core from 4.175.71.9 port 52060 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:08.190899 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:08.199720 systemd-logind[1493]: New session 8 of user core. Apr 24 23:38:08.203922 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:38:08.493711 sshd[5944]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:08.498085 systemd-logind[1493]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:38:08.498479 systemd[1]: sshd@7-65.108.57.84:22-4.175.71.9:52060.service: Deactivated successfully. Apr 24 23:38:08.500473 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:38:08.503330 systemd-logind[1493]: Removed session 8. Apr 24 23:38:13.544218 systemd[1]: Started sshd@8-65.108.57.84:22-4.175.71.9:52068.service - OpenSSH per-connection server daemon (4.175.71.9:52068). Apr 24 23:38:13.749912 sshd[5961]: Accepted publickey for core from 4.175.71.9 port 52068 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:13.752909 sshd[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:13.761019 systemd-logind[1493]: New session 9 of user core. Apr 24 23:38:13.769028 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:38:14.013039 sshd[5961]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:14.018311 systemd[1]: sshd@8-65.108.57.84:22-4.175.71.9:52068.service: Deactivated successfully. Apr 24 23:38:14.021853 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 24 23:38:14.024934 systemd-logind[1493]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:38:14.027873 systemd-logind[1493]: Removed session 9. Apr 24 23:38:19.063340 systemd[1]: Started sshd@9-65.108.57.84:22-4.175.71.9:57326.service - OpenSSH per-connection server daemon (4.175.71.9:57326). Apr 24 23:38:19.304446 sshd[6018]: Accepted publickey for core from 4.175.71.9 port 57326 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:19.307773 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:19.315671 systemd-logind[1493]: New session 10 of user core. Apr 24 23:38:19.325125 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:38:19.600265 sshd[6018]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:19.605331 systemd[1]: sshd@9-65.108.57.84:22-4.175.71.9:57326.service: Deactivated successfully. Apr 24 23:38:19.608026 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:38:19.609282 systemd-logind[1493]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:38:19.610913 systemd-logind[1493]: Removed session 10. Apr 24 23:38:24.649228 systemd[1]: Started sshd@10-65.108.57.84:22-4.175.71.9:57330.service - OpenSSH per-connection server daemon (4.175.71.9:57330). Apr 24 23:38:24.872953 sshd[6067]: Accepted publickey for core from 4.175.71.9 port 57330 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:24.875899 sshd[6067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:24.884771 systemd-logind[1493]: New session 11 of user core. Apr 24 23:38:24.893034 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:38:25.140251 sshd[6067]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:25.145682 systemd[1]: sshd@10-65.108.57.84:22-4.175.71.9:57330.service: Deactivated successfully. 
Apr 24 23:38:25.147570 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:38:25.148441 systemd-logind[1493]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:38:25.149762 systemd-logind[1493]: Removed session 11. Apr 24 23:38:25.179192 systemd[1]: Started sshd@11-65.108.57.84:22-4.175.71.9:57336.service - OpenSSH per-connection server daemon (4.175.71.9:57336). Apr 24 23:38:25.389088 sshd[6081]: Accepted publickey for core from 4.175.71.9 port 57336 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:25.390923 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:25.396962 systemd-logind[1493]: New session 12 of user core. Apr 24 23:38:25.404061 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:38:25.702011 sshd[6081]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:25.706015 systemd[1]: sshd@11-65.108.57.84:22-4.175.71.9:57336.service: Deactivated successfully. Apr 24 23:38:25.707639 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:38:25.709155 systemd-logind[1493]: Session 12 logged out. Waiting for processes to exit. Apr 24 23:38:25.710087 systemd-logind[1493]: Removed session 12. Apr 24 23:38:25.741391 systemd[1]: Started sshd@12-65.108.57.84:22-4.175.71.9:50100.service - OpenSSH per-connection server daemon (4.175.71.9:50100). Apr 24 23:38:25.955934 sshd[6092]: Accepted publickey for core from 4.175.71.9 port 50100 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:25.957365 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:25.962890 systemd-logind[1493]: New session 13 of user core. Apr 24 23:38:25.970149 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:38:26.216606 sshd[6092]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:26.220110 systemd-logind[1493]: Session 13 logged out. 
Waiting for processes to exit. Apr 24 23:38:26.221071 systemd[1]: sshd@12-65.108.57.84:22-4.175.71.9:50100.service: Deactivated successfully. Apr 24 23:38:26.223026 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:38:26.224228 systemd-logind[1493]: Removed session 13. Apr 24 23:38:31.260200 systemd[1]: Started sshd@13-65.108.57.84:22-4.175.71.9:50116.service - OpenSSH per-connection server daemon (4.175.71.9:50116). Apr 24 23:38:31.472671 sshd[6125]: Accepted publickey for core from 4.175.71.9 port 50116 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:31.476044 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:31.486357 systemd-logind[1493]: New session 14 of user core. Apr 24 23:38:31.493086 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:38:31.727997 sshd[6125]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:31.730891 systemd[1]: sshd@13-65.108.57.84:22-4.175.71.9:50116.service: Deactivated successfully. Apr 24 23:38:31.733003 systemd[1]: session-14.scope: Deactivated successfully. Apr 24 23:38:31.734366 systemd-logind[1493]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:38:31.736082 systemd-logind[1493]: Removed session 14. Apr 24 23:38:31.774148 systemd[1]: Started sshd@14-65.108.57.84:22-4.175.71.9:50132.service - OpenSSH per-connection server daemon (4.175.71.9:50132). Apr 24 23:38:32.004903 sshd[6138]: Accepted publickey for core from 4.175.71.9 port 50132 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:32.007692 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:32.016146 systemd-logind[1493]: New session 15 of user core. Apr 24 23:38:32.022076 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 24 23:38:32.417578 sshd[6138]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:32.425130 systemd[1]: sshd@14-65.108.57.84:22-4.175.71.9:50132.service: Deactivated successfully. Apr 24 23:38:32.429737 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 23:38:32.431634 systemd-logind[1493]: Session 15 logged out. Waiting for processes to exit. Apr 24 23:38:32.433246 systemd-logind[1493]: Removed session 15. Apr 24 23:38:32.472391 systemd[1]: Started sshd@15-65.108.57.84:22-4.175.71.9:50148.service - OpenSSH per-connection server daemon (4.175.71.9:50148). Apr 24 23:38:32.706858 sshd[6159]: Accepted publickey for core from 4.175.71.9 port 50148 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:32.707797 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:32.713783 systemd-logind[1493]: New session 16 of user core. Apr 24 23:38:32.723053 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:38:33.359815 sshd[6159]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:33.364119 systemd[1]: sshd@15-65.108.57.84:22-4.175.71.9:50148.service: Deactivated successfully. Apr 24 23:38:33.367281 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:38:33.370160 systemd-logind[1493]: Session 16 logged out. Waiting for processes to exit. Apr 24 23:38:33.371957 systemd-logind[1493]: Removed session 16. Apr 24 23:38:33.403524 systemd[1]: Started sshd@16-65.108.57.84:22-4.175.71.9:50150.service - OpenSSH per-connection server daemon (4.175.71.9:50150). Apr 24 23:38:33.615874 sshd[6187]: Accepted publickey for core from 4.175.71.9 port 50150 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:33.620117 sshd[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:33.628423 systemd-logind[1493]: New session 17 of user core. 
Apr 24 23:38:33.632095 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:38:33.951559 sshd[6187]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:33.955017 systemd-logind[1493]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:38:33.956137 systemd[1]: sshd@16-65.108.57.84:22-4.175.71.9:50150.service: Deactivated successfully. Apr 24 23:38:33.958068 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:38:33.959067 systemd-logind[1493]: Removed session 17. Apr 24 23:38:33.994790 systemd[1]: Started sshd@17-65.108.57.84:22-4.175.71.9:50152.service - OpenSSH per-connection server daemon (4.175.71.9:50152). Apr 24 23:38:34.203389 sshd[6198]: Accepted publickey for core from 4.175.71.9 port 50152 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:34.206989 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:34.216298 systemd-logind[1493]: New session 18 of user core. Apr 24 23:38:34.223414 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 23:38:34.470113 sshd[6198]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:34.473624 systemd-logind[1493]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:38:34.474138 systemd[1]: sshd@17-65.108.57.84:22-4.175.71.9:50152.service: Deactivated successfully. Apr 24 23:38:34.475934 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:38:34.476724 systemd-logind[1493]: Removed session 18. Apr 24 23:38:39.519184 systemd[1]: Started sshd@18-65.108.57.84:22-4.175.71.9:50922.service - OpenSSH per-connection server daemon (4.175.71.9:50922). 
Apr 24 23:38:39.749883 sshd[6221]: Accepted publickey for core from 4.175.71.9 port 50922 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:39.751342 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:39.755272 systemd-logind[1493]: New session 19 of user core. Apr 24 23:38:39.762112 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 23:38:39.987734 sshd[6221]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:39.992567 systemd[1]: sshd@18-65.108.57.84:22-4.175.71.9:50922.service: Deactivated successfully. Apr 24 23:38:39.996920 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 23:38:39.998169 systemd-logind[1493]: Session 19 logged out. Waiting for processes to exit. Apr 24 23:38:39.999157 systemd-logind[1493]: Removed session 19. Apr 24 23:38:45.037253 systemd[1]: Started sshd@19-65.108.57.84:22-4.175.71.9:50924.service - OpenSSH per-connection server daemon (4.175.71.9:50924). Apr 24 23:38:45.257437 sshd[6234]: Accepted publickey for core from 4.175.71.9 port 50924 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:45.260366 sshd[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:45.269567 systemd-logind[1493]: New session 20 of user core. Apr 24 23:38:45.275049 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 23:38:45.512951 sshd[6234]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:45.519311 systemd[1]: sshd@19-65.108.57.84:22-4.175.71.9:50924.service: Deactivated successfully. Apr 24 23:38:45.522564 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 23:38:45.523478 systemd-logind[1493]: Session 20 logged out. Waiting for processes to exit. Apr 24 23:38:45.524621 systemd-logind[1493]: Removed session 20. 
Apr 24 23:38:50.556085 systemd[1]: Started sshd@20-65.108.57.84:22-4.175.71.9:32986.service - OpenSSH per-connection server daemon (4.175.71.9:32986). Apr 24 23:38:50.768049 sshd[6304]: Accepted publickey for core from 4.175.71.9 port 32986 ssh2: RSA SHA256:/LB5UM8JE+Gm8PLCmanmk+IzzQFWk//dmRsy5hU4ZbM Apr 24 23:38:50.771334 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:50.779484 systemd-logind[1493]: New session 21 of user core. Apr 24 23:38:50.787022 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 24 23:38:51.027228 sshd[6304]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:51.034541 systemd[1]: sshd@20-65.108.57.84:22-4.175.71.9:32986.service: Deactivated successfully. Apr 24 23:38:51.036953 systemd[1]: session-21.scope: Deactivated successfully. Apr 24 23:38:51.038678 systemd-logind[1493]: Session 21 logged out. Waiting for processes to exit. Apr 24 23:38:51.039683 systemd-logind[1493]: Removed session 21. Apr 24 23:38:58.942441 systemd[1]: run-containerd-runc-k8s.io-35cae172a41f1e415795c2afd6d4258be273f536968aaa9843e1e3bf5097c389-runc.BiuAly.mount: Deactivated successfully. Apr 24 23:39:06.025730 kubelet[2559]: E0424 23:39:06.025228 2559 controller.go:195] "Failed to update lease" err="Put \"https://65.108.57.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-61b787660f?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 24 23:39:06.451663 kubelet[2559]: E0424 23:39:06.451320 2559 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40242->10.0.0.2:2379: read: connection timed out" Apr 24 23:39:07.110855 systemd[1]: cri-containerd-60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2.scope: Deactivated successfully. 
Apr 24 23:39:07.112235 systemd[1]: cri-containerd-60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2.scope: Consumed 8.341s CPU time. Apr 24 23:39:07.139883 containerd[1515]: time="2026-04-24T23:39:07.137723943Z" level=info msg="shim disconnected" id=60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2 namespace=k8s.io Apr 24 23:39:07.139883 containerd[1515]: time="2026-04-24T23:39:07.137775169Z" level=warning msg="cleaning up after shim disconnected" id=60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2 namespace=k8s.io Apr 24 23:39:07.139883 containerd[1515]: time="2026-04-24T23:39:07.137782149Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:07.142905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2-rootfs.mount: Deactivated successfully. Apr 24 23:39:07.192506 systemd[1]: cri-containerd-e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964.scope: Deactivated successfully. Apr 24 23:39:07.192840 systemd[1]: cri-containerd-e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964.scope: Consumed 3.690s CPU time, 18.3M memory peak, 0B memory swap peak. Apr 24 23:39:07.219541 containerd[1515]: time="2026-04-24T23:39:07.219334560Z" level=info msg="shim disconnected" id=e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964 namespace=k8s.io Apr 24 23:39:07.219541 containerd[1515]: time="2026-04-24T23:39:07.219390023Z" level=warning msg="cleaning up after shim disconnected" id=e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964 namespace=k8s.io Apr 24 23:39:07.219541 containerd[1515]: time="2026-04-24T23:39:07.219399447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:07.219584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964-rootfs.mount: Deactivated successfully. 
Apr 24 23:39:07.550467 kubelet[2559]: I0424 23:39:07.550400 2559 scope.go:117] "RemoveContainer" containerID="60c4fd0b3a7c36d1a4c03bf8fc648bf83e6bdd84eb14687fcdbfd8dc24d9d9a2" Apr 24 23:39:07.553888 containerd[1515]: time="2026-04-24T23:39:07.553466200Z" level=info msg="CreateContainer within sandbox \"b3282e86a3bd00ef4d4f2de6ac1b965b81f07bf9f48acc62ad8349a03c1ec3e9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 24 23:39:07.554635 kubelet[2559]: I0424 23:39:07.554595 2559 scope.go:117] "RemoveContainer" containerID="e9066994fc1e6db2852a56b190eaa70683763cc5c7ed59f58404befc025eb964" Apr 24 23:39:07.559144 containerd[1515]: time="2026-04-24T23:39:07.559092557Z" level=info msg="CreateContainer within sandbox \"4f855b284b1b39210d1b9d68689c423ade6a887f3303673918d5bf15682ca7c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 24 23:39:07.584921 containerd[1515]: time="2026-04-24T23:39:07.583645396Z" level=info msg="CreateContainer within sandbox \"b3282e86a3bd00ef4d4f2de6ac1b965b81f07bf9f48acc62ad8349a03c1ec3e9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"643a8cfecc2d7b36f3909dbb9532e9122ea0954e53a15e061425e865403709fd\"" Apr 24 23:39:07.587853 containerd[1515]: time="2026-04-24T23:39:07.587071445Z" level=info msg="StartContainer for \"643a8cfecc2d7b36f3909dbb9532e9122ea0954e53a15e061425e865403709fd\"" Apr 24 23:39:07.588798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180823950.mount: Deactivated successfully. 
Apr 24 23:39:07.599039 containerd[1515]: time="2026-04-24T23:39:07.599010836Z" level=info msg="CreateContainer within sandbox \"4f855b284b1b39210d1b9d68689c423ade6a887f3303673918d5bf15682ca7c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"41453220d75357d18b1baa4b3b0f2d81b81537784becf034b45ae527ab50960e\"" Apr 24 23:39:07.599607 containerd[1515]: time="2026-04-24T23:39:07.599589511Z" level=info msg="StartContainer for \"41453220d75357d18b1baa4b3b0f2d81b81537784becf034b45ae527ab50960e\"" Apr 24 23:39:07.626064 systemd[1]: Started cri-containerd-643a8cfecc2d7b36f3909dbb9532e9122ea0954e53a15e061425e865403709fd.scope - libcontainer container 643a8cfecc2d7b36f3909dbb9532e9122ea0954e53a15e061425e865403709fd. Apr 24 23:39:07.629350 systemd[1]: Started cri-containerd-41453220d75357d18b1baa4b3b0f2d81b81537784becf034b45ae527ab50960e.scope - libcontainer container 41453220d75357d18b1baa4b3b0f2d81b81537784becf034b45ae527ab50960e. Apr 24 23:39:07.658921 containerd[1515]: time="2026-04-24T23:39:07.658867798Z" level=info msg="StartContainer for \"643a8cfecc2d7b36f3909dbb9532e9122ea0954e53a15e061425e865403709fd\" returns successfully" Apr 24 23:39:07.670304 containerd[1515]: time="2026-04-24T23:39:07.670258408Z" level=info msg="StartContainer for \"41453220d75357d18b1baa4b3b0f2d81b81537784becf034b45ae527ab50960e\" returns successfully" Apr 24 23:39:10.994633 kubelet[2559]: E0424 23:39:10.994330 2559 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40068->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-61b787660f.18a96f5563b18a39 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-61b787660f,UID:7b8cc8f950eb568dfaa7b800480292b2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-61b787660f,},FirstTimestamp:2026-04-24 23:39:00.509854265 +0000 UTC m=+147.615756384,LastTimestamp:2026-04-24 23:39:00.509854265 +0000 UTC m=+147.615756384,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-61b787660f,}" Apr 24 23:39:12.124251 systemd[1]: cri-containerd-354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8.scope: Deactivated successfully. Apr 24 23:39:12.125102 systemd[1]: cri-containerd-354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8.scope: Consumed 1.623s CPU time, 16.2M memory peak, 0B memory swap peak. Apr 24 23:39:12.146319 containerd[1515]: time="2026-04-24T23:39:12.145082103Z" level=info msg="shim disconnected" id=354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8 namespace=k8s.io Apr 24 23:39:12.146319 containerd[1515]: time="2026-04-24T23:39:12.145128533Z" level=warning msg="cleaning up after shim disconnected" id=354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8 namespace=k8s.io Apr 24 23:39:12.146319 containerd[1515]: time="2026-04-24T23:39:12.145137236Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:39:12.146164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-354223ca335202ab4ca3131247672d9357ae347741d8521441ad7fd6b5529ed8-rootfs.mount: Deactivated successfully.