Apr 21 10:09:21.938209 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:09:21.938230 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:09:21.938243 kernel: BIOS-provided physical RAM map:
Apr 21 10:09:21.938250 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:09:21.938255 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 21 10:09:21.938259 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 21 10:09:21.938264 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 21 10:09:21.938269 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 21 10:09:21.938273 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 21 10:09:21.938278 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 21 10:09:21.938282 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 21 10:09:21.938289 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 21 10:09:21.938294 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 21 10:09:21.938300 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 21 10:09:21.938308 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 10:09:21.938316 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:09:21.938326 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 10:09:21.938330 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 21 10:09:21.938335 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:09:21.938340 kernel: NX (Execute Disable) protection: active
Apr 21 10:09:21.938344 kernel: APIC: Static calls initialized
Apr 21 10:09:21.938349 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 21 10:09:21.938354 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198
Apr 21 10:09:21.938358 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 21 10:09:21.938363 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 21 10:09:21.938368 kernel: SMBIOS 3.0.0 present.
Apr 21 10:09:21.938373 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 21 10:09:21.938380 kernel: Hypervisor detected: KVM
Apr 21 10:09:21.938391 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:09:21.938398 kernel: kvm-clock: using sched offset of 12653501335 cycles
Apr 21 10:09:21.938404 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:09:21.938409 kernel: tsc: Detected 2396.400 MHz processor
Apr 21 10:09:21.938414 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:09:21.938419 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:09:21.938424 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 21 10:09:21.938429 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:09:21.938434 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:09:21.938441 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 21 10:09:21.938446 kernel: Using GB pages for direct mapping
Apr 21 10:09:21.938451 kernel: Secure boot disabled
Apr 21 10:09:21.938463 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:09:21.938471 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 21 10:09:21.938478 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:09:21.938483 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:21.938491 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:21.938496 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 21 10:09:21.938501 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:21.938506 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:21.938511 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:21.938516 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:21.938521 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:09:21.938529 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 21 10:09:21.938537 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 21 10:09:21.938545 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 21 10:09:21.938553 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 21 10:09:21.938559 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 21 10:09:21.938564 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 21 10:09:21.938569 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 21 10:09:21.938574 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 21 10:09:21.938579 kernel: No NUMA configuration found
Apr 21 10:09:21.938586 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 21 10:09:21.938591 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Apr 21 10:09:21.938596 kernel: Zone ranges:
Apr 21 10:09:21.938601 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:09:21.938608 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:09:21.938616 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:09:21.938623 kernel: Movable zone start for each node
Apr 21 10:09:21.938631 kernel: Early memory node ranges
Apr 21 10:09:21.938637 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:09:21.938642 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 21 10:09:21.938649 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 21 10:09:21.938654 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 21 10:09:21.938659 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:09:21.938664 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 21 10:09:21.938669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:09:21.938674 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:09:21.938679 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 21 10:09:21.938685 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 21 10:09:21.938693 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 21 10:09:21.938704 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 21 10:09:21.938712 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:09:21.938718 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:09:21.938723 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:09:21.938728 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:09:21.938733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:09:21.938738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:09:21.938743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:09:21.938748 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:09:21.938755 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:09:21.938760 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:09:21.938766 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:09:21.938774 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:09:21.938782 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 21 10:09:21.938800 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:09:21.938805 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:09:21.938810 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:09:21.938815 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:09:21.938823 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:09:21.938828 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:09:21.938833 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 21 10:09:21.938838 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:09:21.938843 kernel: random: crng init done
Apr 21 10:09:21.938851 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:09:21.938859 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:09:21.938868 kernel: Fallback order for Node 0: 0
Apr 21 10:09:21.938876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 21 10:09:21.938881 kernel: Policy zone: Normal
Apr 21 10:09:21.938886 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:09:21.938891 kernel: software IO TLB: area num 2.
Apr 21 10:09:21.938896 kernel: Memory: 3827836K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 263128K reserved, 0K cma-reserved)
Apr 21 10:09:21.938902 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:09:21.938907 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:09:21.938912 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:09:21.938917 kernel: Dynamic Preempt: voluntary
Apr 21 10:09:21.938926 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:09:21.938935 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:09:21.938943 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:09:21.938951 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:09:21.938963 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:09:21.938971 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:09:21.938976 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:09:21.938981 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:09:21.938987 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:09:21.938992 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:09:21.938997 kernel: Console: colour dummy device 80x25
Apr 21 10:09:21.939005 kernel: printk: console [tty0] enabled
Apr 21 10:09:21.939016 kernel: printk: console [ttyS0] enabled
Apr 21 10:09:21.939025 kernel: ACPI: Core revision 20230628
Apr 21 10:09:21.939033 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:09:21.939041 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:09:21.939059 kernel: x2apic enabled
Apr 21 10:09:21.939070 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:09:21.939079 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:09:21.939084 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:09:21.939090 kernel: Calibrating delay loop (skipped) preset value.. 4792.80 BogoMIPS (lpj=2396400)
Apr 21 10:09:21.939095 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:09:21.939100 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:09:21.939106 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:09:21.939111 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:09:21.939116 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 21 10:09:21.939124 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:09:21.939129 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:09:21.939135 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:09:21.939140 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 21 10:09:21.939145 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:09:21.939150 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:09:21.939156 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:09:21.939161 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:09:21.939166 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:09:21.939174 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:09:21.939179 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:09:21.939184 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:09:21.939189 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:09:21.939195 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:09:21.939200 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:09:21.939205 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:09:21.939210 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:09:21.939215 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 21 10:09:21.939223 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 21 10:09:21.939228 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:09:21.939233 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:09:21.939239 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:09:21.939244 kernel: landlock: Up and running.
Apr 21 10:09:21.939249 kernel: SELinux: Initializing.
Apr 21 10:09:21.939254 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:09:21.939260 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:09:21.939265 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 21 10:09:21.939275 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:09:21.939283 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:09:21.939292 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:09:21.939300 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:09:21.939308 kernel: ... version: 0
Apr 21 10:09:21.939314 kernel: ... bit width: 48
Apr 21 10:09:21.939319 kernel: ... generic registers: 6
Apr 21 10:09:21.939324 kernel: ... value mask: 0000ffffffffffff
Apr 21 10:09:21.939332 kernel: ... max period: 00007fffffffffff
Apr 21 10:09:21.939337 kernel: ... fixed-purpose events: 0
Apr 21 10:09:21.939342 kernel: ... event mask: 000000000000003f
Apr 21 10:09:21.939347 kernel: signal: max sigframe size: 3376
Apr 21 10:09:21.939352 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:09:21.939358 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:09:21.939363 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:09:21.939368 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:09:21.939374 kernel: .... node #0, CPUs: #1
Apr 21 10:09:21.939379 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:09:21.939386 kernel: smpboot: Max logical packages: 1
Apr 21 10:09:21.939392 kernel: smpboot: Total of 2 processors activated (9585.60 BogoMIPS)
Apr 21 10:09:21.939397 kernel: devtmpfs: initialized
Apr 21 10:09:21.939402 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:09:21.939408 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 21 10:09:21.939413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:09:21.939418 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:09:21.939423 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:09:21.939429 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:09:21.939436 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:09:21.939441 kernel: audit: type=2000 audit(1776766160.737:1): state=initialized audit_enabled=0 res=1
Apr 21 10:09:21.939447 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:09:21.939452 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:09:21.939457 kernel: cpuidle: using governor menu
Apr 21 10:09:21.939462 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:09:21.939467 kernel: dca service started, version 1.12.1
Apr 21 10:09:21.939473 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 21 10:09:21.939478 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:09:21.939485 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:09:21.939491 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:09:21.939496 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:09:21.939501 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:09:21.939506 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:09:21.939512 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:09:21.939517 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:09:21.939522 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:09:21.939527 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:09:21.939547 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:09:21.939553 kernel: ACPI: Interpreter enabled
Apr 21 10:09:21.939558 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:09:21.939563 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:09:21.939568 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:09:21.939573 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:09:21.939578 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:09:21.939584 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:09:21.939737 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:09:21.939888 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:09:21.940002 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:09:21.940012 kernel: PCI host bridge to bus 0000:00
Apr 21 10:09:21.940146 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:09:21.940249 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:09:21.940356 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:09:21.940463 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 21 10:09:21.940562 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 21 10:09:21.940667 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 21 10:09:21.940777 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:09:21.940922 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:09:21.941055 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 21 10:09:21.941179 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 21 10:09:21.941300 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 21 10:09:21.941412 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 21 10:09:21.941527 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:09:21.941639 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:09:21.941752 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:09:21.941892 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.942019 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 21 10:09:21.942148 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.942267 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 21 10:09:21.942394 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.942514 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 21 10:09:21.942632 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.942753 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 21 10:09:21.942902 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.943025 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 21 10:09:21.943140 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.943251 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 21 10:09:21.943356 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.943458 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 21 10:09:21.943561 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.943657 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 21 10:09:21.943775 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:21.943909 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 21 10:09:21.944035 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:09:21.944169 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:09:21.944303 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:09:21.944416 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 21 10:09:21.944529 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 21 10:09:21.944651 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:09:21.944761 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 21 10:09:21.944922 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:09:21.945062 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 21 10:09:21.945184 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 21 10:09:21.945303 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:09:21.945417 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:09:21.945531 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:09:21.945641 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:09:21.945769 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 21 10:09:21.945965 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 21 10:09:21.946093 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:09:21.946204 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:09:21.946337 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 21 10:09:21.946455 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 21 10:09:21.946575 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 21 10:09:21.946702 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:09:21.946880 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:09:21.946992 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:09:21.947130 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 21 10:09:21.947246 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 21 10:09:21.947366 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:09:21.947475 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:09:21.947606 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 21 10:09:21.947727 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 21 10:09:21.948186 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 21 10:09:21.948292 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:09:21.948388 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:09:21.948483 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:09:21.948590 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 21 10:09:21.948691 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 21 10:09:21.948998 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 21 10:09:21.949145 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:09:21.949266 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:09:21.949387 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:09:21.949395 kernel: acpiphp: Slot [0] registered
Apr 21 10:09:21.949520 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:09:21.949645 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 21 10:09:21.949769 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 21 10:09:21.950241 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:09:21.950363 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:09:21.950479 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:09:21.950591 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:09:21.950599 kernel: acpiphp: Slot [0-2] registered
Apr 21 10:09:21.950718 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:09:21.950863 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:09:21.950987 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:09:21.950995 kernel: acpiphp: Slot [0-3] registered
Apr 21 10:09:21.951118 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:09:21.951233 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:09:21.951343 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:09:21.951355 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:09:21.951364 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:09:21.951372 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:09:21.951381 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:09:21.951390 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:09:21.951396 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:09:21.951401 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:09:21.951406 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:09:21.951412 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:09:21.951417 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:09:21.951422 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:09:21.951427 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:09:21.951432 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:09:21.951440 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:09:21.951446 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:09:21.951451 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:09:21.951456 kernel: iommu: Default domain type: Translated
Apr 21 10:09:21.951462 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:09:21.951467 kernel: efivars: Registered efivars operations
Apr 21 10:09:21.951472 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:09:21.951477 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:09:21.951483 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 21 10:09:21.951491 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 21 10:09:21.951496 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 21 10:09:21.951501 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 21 10:09:21.951608 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:09:21.951705 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:09:21.951812 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:09:21.951819 kernel: vgaarb: loaded
Apr 21 10:09:21.951825 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:09:21.951830 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:09:21.951839 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:09:21.951844 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:09:21.951850 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:09:21.951855 kernel: pnp: PnP ACPI init
Apr 21 10:09:21.951977 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 21 10:09:21.951986 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:09:21.951991 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:09:21.951997 kernel: NET: Registered PF_INET protocol family
Apr 21 10:09:21.952019 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:09:21.952027 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:09:21.952033 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:09:21.952038 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:09:21.952044 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:09:21.952060 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:09:21.952066 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:09:21.952072 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:09:21.952080 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:09:21.952086 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:09:21.952205 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:09:21.952309 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:09:21.952417 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 21 10:09:21.952527 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 21 10:09:21.952654 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 21 10:09:21.952753 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 21 10:09:21.952900 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 21 10:09:21.953017 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 21 10:09:21.953150 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 21 10:09:21.953248 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:09:21.953361 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:09:21.953459 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:09:21.953567 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:09:21.953670 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:09:21.953767 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:09:21.955910 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:09:21.956029 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:09:21.956150 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:09:21.956263 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:09:21.956368 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:09:21.956464 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:09:21.956560 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:09:21.956657 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:09:21.956774 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:09:21.958559 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:09:21.958671 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 21 10:09:21.958776 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:09:21.958913 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 21 10:09:21.959014 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:09:21.959134 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:09:21.959250 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:09:21.959374 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 21 10:09:21.959472 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:09:21.959569 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:09:21.959666 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:09:21.959767 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 21 10:09:21.959915 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:09:21.960029 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:09:21.960156 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 
21 10:09:21.960268 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 21 10:09:21.960378 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 21 10:09:21.960481 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 21 10:09:21.960588 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 21 10:09:21.960689 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 21 10:09:21.962852 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 21 10:09:21.962982 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 21 10:09:21.963114 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 21 10:09:21.963245 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 21 10:09:21.963356 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 21 10:09:21.963470 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 21 10:09:21.963588 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 21 10:09:21.963695 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 21 10:09:21.963816 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 21 10:09:21.963930 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 21 10:09:21.964031 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 21 10:09:21.964147 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 21 10:09:21.964252 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 21 10:09:21.964371 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 21 10:09:21.964477 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 21 10:09:21.964619 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 21 10:09:21.966937 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 21 10:09:21.967084 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 21 10:09:21.967199 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 21 10:09:21.967209 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 21 10:09:21.967216 kernel: PCI: CLS 0 bytes, default 64 Apr 21 10:09:21.967222 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 21 10:09:21.967228 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 21 10:09:21.967242 kernel: Initialise system trusted keyrings Apr 21 10:09:21.967249 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 10:09:21.967255 kernel: Key type asymmetric registered Apr 21 10:09:21.967261 kernel: Asymmetric key parser 'x509' registered Apr 21 10:09:21.967267 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 10:09:21.967273 kernel: io scheduler mq-deadline registered Apr 21 10:09:21.967279 kernel: io scheduler kyber registered Apr 21 10:09:21.967285 kernel: io scheduler bfq registered Apr 21 10:09:21.967401 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 21 10:09:21.967509 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 21 10:09:21.967643 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 21 10:09:21.967759 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 21 10:09:21.968908 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 21 10:09:21.969026 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 21 10:09:21.969139 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 21 10:09:21.969257 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 21 10:09:21.969359 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 21 10:09:21.969457 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 21 10:09:21.969563 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 21 
10:09:21.969660 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 21 10:09:21.969760 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 21 10:09:21.970413 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 21 10:09:21.970521 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 21 10:09:21.970619 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 21 10:09:21.970626 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 21 10:09:21.970735 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 21 10:09:21.970888 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 21 10:09:21.970898 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 10:09:21.970904 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 21 10:09:21.970910 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 10:09:21.970915 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 10:09:21.970921 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 21 10:09:21.970927 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 21 10:09:21.970933 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 21 10:09:21.970939 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 21 10:09:21.971057 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 21 10:09:21.971152 kernel: rtc_cmos 00:03: registered as rtc0 Apr 21 10:09:21.971243 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:09:21 UTC (1776766161) Apr 21 10:09:21.971335 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 21 10:09:21.971342 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 21 10:09:21.971348 kernel: efifb: probing for efifb Apr 21 10:09:21.971354 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 21 10:09:21.971363 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 21 
10:09:21.971368 kernel: efifb: scrolling: redraw Apr 21 10:09:21.971374 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 21 10:09:21.971380 kernel: Console: switching to colour frame buffer device 160x50 Apr 21 10:09:21.971385 kernel: fb0: EFI VGA frame buffer device Apr 21 10:09:21.971391 kernel: pstore: Using crash dump compression: deflate Apr 21 10:09:21.971396 kernel: pstore: Registered efi_pstore as persistent store backend Apr 21 10:09:21.971402 kernel: NET: Registered PF_INET6 protocol family Apr 21 10:09:21.971414 kernel: Segment Routing with IPv6 Apr 21 10:09:21.971422 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 10:09:21.971428 kernel: NET: Registered PF_PACKET protocol family Apr 21 10:09:21.971433 kernel: Key type dns_resolver registered Apr 21 10:09:21.971439 kernel: IPI shorthand broadcast: enabled Apr 21 10:09:21.971445 kernel: sched_clock: Marking stable (1295011477, 223083315)->(1571257055, -53162263) Apr 21 10:09:21.971451 kernel: registered taskstats version 1 Apr 21 10:09:21.971456 kernel: Loading compiled-in X.509 certificates Apr 21 10:09:21.971462 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 10:09:21.971468 kernel: Key type .fscrypt registered Apr 21 10:09:21.971474 kernel: Key type fscrypt-provisioning registered Apr 21 10:09:21.971482 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 21 10:09:21.971488 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:09:21.971493 kernel: ima: No architecture policies found Apr 21 10:09:21.971499 kernel: clk: Disabling unused clocks Apr 21 10:09:21.971504 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:09:21.971510 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:09:21.971516 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:09:21.971522 kernel: Run /init as init process Apr 21 10:09:21.971530 kernel: with arguments: Apr 21 10:09:21.971536 kernel: /init Apr 21 10:09:21.971541 kernel: with environment: Apr 21 10:09:21.971550 kernel: HOME=/ Apr 21 10:09:21.971555 kernel: TERM=linux Apr 21 10:09:21.971563 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:09:21.971571 systemd[1]: Detected virtualization kvm. Apr 21 10:09:21.971580 systemd[1]: Detected architecture x86-64. Apr 21 10:09:21.971586 systemd[1]: Running in initrd. Apr 21 10:09:21.971592 systemd[1]: No hostname configured, using default hostname. Apr 21 10:09:21.971598 systemd[1]: Hostname set to . Apr 21 10:09:21.971604 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:09:21.971610 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:09:21.971616 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:09:21.971622 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:09:21.971629 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 21 10:09:21.971637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:09:21.971643 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:09:21.971649 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:09:21.971656 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:09:21.971662 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:09:21.971668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:09:21.971677 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:09:21.971683 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:09:21.971689 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:09:21.971695 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:09:21.971701 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:09:21.971707 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:09:21.971713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:09:21.971718 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:09:21.971725 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:09:21.971733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:09:21.971739 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:09:21.971745 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:09:21.971751 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:09:21.971757 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:09:21.971763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:09:21.971769 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:09:21.971775 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:09:21.971781 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:09:21.973803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:09:21.973842 systemd-journald[189]: Collecting audit messages is disabled. Apr 21 10:09:21.973859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:21.973870 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:09:21.973876 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:09:21.973882 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:09:21.973889 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:09:21.973896 systemd-journald[189]: Journal started Apr 21 10:09:21.973911 systemd-journald[189]: Runtime Journal (/run/log/journal/57c7d9c6554642069ded26c1018c216e) is 8.0M, max 76.3M, 68.3M free. Apr 21 10:09:21.965778 systemd-modules-load[190]: Inserted module 'overlay' Apr 21 10:09:21.984701 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:09:21.983238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:21.986563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 21 10:09:21.993831 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Apr 21 10:09:21.996009 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 21 10:09:21.996818 kernel: Bridge firewalling registered Apr 21 10:09:21.998100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:09:21.999149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:09:22.002952 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:09:22.004715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:09:22.013433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:09:22.014922 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:09:22.017917 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:09:22.029679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:09:22.031185 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:09:22.032163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:09:22.034413 dracut-cmdline[220]: dracut-dracut-053 Apr 21 10:09:22.036432 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:09:22.042964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:09:22.067239 systemd-resolved[235]: Positive Trust Anchors: Apr 21 10:09:22.067254 systemd-resolved[235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:09:22.067276 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:09:22.069983 systemd-resolved[235]: Defaulting to hostname 'linux'. Apr 21 10:09:22.070907 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:09:22.071478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:09:22.113834 kernel: SCSI subsystem initialized Apr 21 10:09:22.121824 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:09:22.131821 kernel: iscsi: registered transport (tcp) Apr 21 10:09:22.149465 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:09:22.149531 kernel: QLogic iSCSI HBA Driver Apr 21 10:09:22.187139 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:09:22.192916 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:09:22.215466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:09:22.215535 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:09:22.215896 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:09:22.254829 kernel: raid6: avx512x4 gen() 43715 MB/s Apr 21 10:09:22.272825 kernel: raid6: avx512x2 gen() 45197 MB/s Apr 21 10:09:22.290824 kernel: raid6: avx512x1 gen() 42048 MB/s Apr 21 10:09:22.308861 kernel: raid6: avx2x4 gen() 43155 MB/s Apr 21 10:09:22.326839 kernel: raid6: avx2x2 gen() 47711 MB/s Apr 21 10:09:22.345866 kernel: raid6: avx2x1 gen() 37164 MB/s Apr 21 10:09:22.345916 kernel: raid6: using algorithm avx2x2 gen() 47711 MB/s Apr 21 10:09:22.365930 kernel: raid6: .... xor() 35598 MB/s, rmw enabled Apr 21 10:09:22.365967 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:09:22.382825 kernel: xor: automatically using best checksumming function avx Apr 21 10:09:22.508853 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:09:22.525268 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:09:22.530942 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:09:22.578875 systemd-udevd[408]: Using default interface naming scheme 'v255'. Apr 21 10:09:22.585400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:09:22.597019 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:09:22.615070 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Apr 21 10:09:22.656315 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:09:22.663002 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:09:22.738900 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:09:22.754169 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:09:22.785202 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:09:22.786023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:09:22.786678 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:09:22.787333 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:09:22.792975 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:09:22.808545 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:09:22.851035 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:09:22.855660 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:09:22.856327 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:09:22.858132 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:09:22.858835 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:09:22.859530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:22.860450 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:22.869213 kernel: scsi host0: Virtio SCSI HBA Apr 21 10:09:22.869406 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 10:09:22.872377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:22.884754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:09:22.887113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:22.889540 kernel: libata version 3.00 loaded. Apr 21 10:09:22.896528 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:22.909029 kernel: AVX2 version of gcm_enc/dec engaged. 
Apr 21 10:09:22.913013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:22.918602 kernel: ACPI: bus type USB registered Apr 21 10:09:22.918625 kernel: AES CTR mode by8 optimization enabled Apr 21 10:09:22.928835 kernel: usbcore: registered new interface driver usbfs Apr 21 10:09:22.929955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:09:22.940907 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:09:22.941210 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:09:22.941225 kernel: usbcore: registered new interface driver hub Apr 21 10:09:22.951668 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:09:22.951906 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:09:22.953557 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:09:22.963486 kernel: scsi host1: ahci Apr 21 10:09:22.963529 kernel: usbcore: registered new device driver usb Apr 21 10:09:22.967388 kernel: scsi host2: ahci Apr 21 10:09:22.967597 kernel: scsi host3: ahci Apr 21 10:09:22.967723 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 21 10:09:22.972807 kernel: scsi host4: ahci Apr 21 10:09:22.972958 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 21 10:09:22.973099 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:09:22.976999 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 21 10:09:22.977187 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 10:09:22.977315 kernel: scsi host5: ahci Apr 21 10:09:22.978925 kernel: scsi host6: ahci Apr 21 10:09:22.980948 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Apr 21 10:09:22.989745 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Apr 21 10:09:22.989770 kernel: ata3: SATA max UDMA/133 abar 
m4096@0x81380000 port 0x81380200 irq 48 Apr 21 10:09:22.989780 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:09:22.989788 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Apr 21 10:09:22.989807 kernel: GPT:17805311 != 160006143 Apr 21 10:09:22.989815 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Apr 21 10:09:22.989822 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:09:22.989835 kernel: GPT:17805311 != 160006143 Apr 21 10:09:22.989842 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Apr 21 10:09:22.989850 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 21 10:09:22.989857 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:09:23.005584 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:09:23.301859 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:23.301948 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:09:23.313821 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:09:23.313960 kernel: ata1.00: applying bridge limits Apr 21 10:09:23.320020 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:23.326826 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:23.326879 kernel: ata1.00: configured for UDMA/100 Apr 21 10:09:23.333829 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:23.335882 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:09:23.335974 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:23.383959 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 10:09:23.384409 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 21 10:09:23.392072 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 21 10:09:23.404211 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 
10:09:23.404381 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 21 10:09:23.413925 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 21 10:09:23.414102 kernel: hub 1-0:1.0: USB hub found Apr 21 10:09:23.414260 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462) Apr 21 10:09:23.420817 kernel: hub 1-0:1.0: 4 ports detected Apr 21 10:09:23.426810 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:09:23.426985 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (459) Apr 21 10:09:23.426995 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:09:23.429646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 10:09:23.441821 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 21 10:09:23.441989 kernel: hub 2-0:1.0: USB hub found Apr 21 10:09:23.442484 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 10:09:23.446832 kernel: hub 2-0:1.0: 4 ports detected Apr 21 10:09:23.449821 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:09:23.451934 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 10:09:23.452869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 21 10:09:23.456887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:09:23.461914 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:09:23.467366 disk-uuid[587]: Primary Header is updated. Apr 21 10:09:23.467366 disk-uuid[587]: Secondary Entries is updated. Apr 21 10:09:23.467366 disk-uuid[587]: Secondary Header is updated. 
Apr 21 10:09:23.475819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:09:23.683948 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 21 10:09:23.835014 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 10:09:23.845891 kernel: usbcore: registered new interface driver usbhid Apr 21 10:09:23.845951 kernel: usbhid: USB HID core driver Apr 21 10:09:23.861919 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 21 10:09:23.861972 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 21 10:09:24.486845 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:09:24.488572 disk-uuid[588]: The operation has completed successfully. Apr 21 10:09:24.556952 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:09:24.557055 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:09:24.567900 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:09:24.571436 sh[606]: Success Apr 21 10:09:24.584813 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:09:24.628124 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:09:24.630574 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:09:24.632311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:09:24.648296 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:09:24.648331 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:09:24.648347 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:09:24.652866 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:09:24.652879 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:09:24.661811 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:09:24.663691 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:09:24.664578 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:09:24.676925 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:09:24.679917 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:09:24.692947 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:09:24.692977 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:09:24.697139 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:09:24.705300 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:09:24.705335 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:09:24.718246 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:09:24.717956 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:09:24.724707 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:09:24.730914 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:09:24.781338 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:09:24.792431 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:09:24.794442 ignition[730]: Ignition 2.19.0
Apr 21 10:09:24.795267 ignition[730]: Stage: fetch-offline
Apr 21 10:09:24.795316 ignition[730]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:24.795328 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:24.795419 ignition[730]: parsed url from cmdline: ""
Apr 21 10:09:24.795422 ignition[730]: no config URL provided
Apr 21 10:09:24.795427 ignition[730]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:09:24.795434 ignition[730]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:09:24.795440 ignition[730]: failed to fetch config: resource requires networking
Apr 21 10:09:24.799293 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:09:24.795566 ignition[730]: Ignition finished successfully
Apr 21 10:09:24.809650 systemd-networkd[791]: lo: Link UP
Apr 21 10:09:24.809659 systemd-networkd[791]: lo: Gained carrier
Apr 21 10:09:24.811952 systemd-networkd[791]: Enumeration completed
Apr 21 10:09:24.812036 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:09:24.812443 systemd[1]: Reached target network.target - Network.
Apr 21 10:09:24.814013 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:24.814017 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:09:24.815503 systemd-networkd[791]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:24.815507 systemd-networkd[791]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:09:24.817206 systemd-networkd[791]: eth0: Link UP
Apr 21 10:09:24.817210 systemd-networkd[791]: eth0: Gained carrier
Apr 21 10:09:24.817216 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:24.818932 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:09:24.820990 systemd-networkd[791]: eth1: Link UP
Apr 21 10:09:24.820994 systemd-networkd[791]: eth1: Gained carrier
Apr 21 10:09:24.821000 systemd-networkd[791]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:24.830787 ignition[796]: Ignition 2.19.0
Apr 21 10:09:24.830813 ignition[796]: Stage: fetch
Apr 21 10:09:24.830943 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:24.830953 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:24.831019 ignition[796]: parsed url from cmdline: ""
Apr 21 10:09:24.831031 ignition[796]: no config URL provided
Apr 21 10:09:24.831036 ignition[796]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:09:24.831043 ignition[796]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:09:24.831058 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 21 10:09:24.831179 ignition[796]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:09:24.865884 systemd-networkd[791]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 10:09:24.879837 systemd-networkd[791]: eth0: DHCPv4 address 37.27.217.213/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 10:09:25.031842 ignition[796]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 21 10:09:25.035529 ignition[796]: GET result: OK
Apr 21 10:09:25.035645 ignition[796]: parsing config with SHA512: 8a584e6a64786b1921b7589b7d91ab4921622946ad9772c67ca1e6c44f7900f18e8763d043266c7a6fbc370819e90322ca33bb7d5bbfca24ec19a91fb0ccc4e3
Apr 21 10:09:25.041042 unknown[796]: fetched base config from "system"
Apr 21 10:09:25.041062 unknown[796]: fetched base config from "system"
Apr 21 10:09:25.041074 unknown[796]: fetched user config from "hetzner"
Apr 21 10:09:25.043692 ignition[796]: fetch: fetch complete
Apr 21 10:09:25.043705 ignition[796]: fetch: fetch passed
Apr 21 10:09:25.045567 ignition[796]: Ignition finished successfully
Apr 21 10:09:25.052303 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:09:25.061102 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:09:25.091614 ignition[804]: Ignition 2.19.0
Apr 21 10:09:25.091634 ignition[804]: Stage: kargs
Apr 21 10:09:25.091952 ignition[804]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:25.091975 ignition[804]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:25.097821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:09:25.093338 ignition[804]: kargs: kargs passed
Apr 21 10:09:25.093418 ignition[804]: Ignition finished successfully
Apr 21 10:09:25.107456 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:09:25.133195 ignition[811]: Ignition 2.19.0
Apr 21 10:09:25.133206 ignition[811]: Stage: disks
Apr 21 10:09:25.137262 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:09:25.133374 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:25.138368 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:09:25.133387 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:25.139293 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:09:25.133975 ignition[811]: disks: disks passed
Apr 21 10:09:25.140356 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:09:25.134013 ignition[811]: Ignition finished successfully
Apr 21 10:09:25.141441 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:09:25.142505 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:09:25.149960 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:09:25.167613 systemd-fsck[819]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 21 10:09:25.170888 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:09:25.176960 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:09:25.259820 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:09:25.260133 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:09:25.261073 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:09:25.267854 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:09:25.283902 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:09:25.287547 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 10:09:25.288273 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:09:25.289010 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:09:25.291339 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:09:25.303623 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (827)
Apr 21 10:09:25.303643 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:09:25.303652 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:09:25.303660 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:09:25.301964 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:09:25.312943 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:09:25.312972 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:09:25.318442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:09:25.345574 coreos-metadata[829]: Apr 21 10:09:25.345 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 21 10:09:25.346913 coreos-metadata[829]: Apr 21 10:09:25.346 INFO Fetch successful
Apr 21 10:09:25.348570 coreos-metadata[829]: Apr 21 10:09:25.348 INFO wrote hostname ci-4081-3-7-e-553501566f to /sysroot/etc/hostname
Apr 21 10:09:25.349588 initrd-setup-root[855]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:09:25.350139 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:09:25.355009 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:09:25.358948 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:09:25.363825 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:09:25.445603 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:09:25.448945 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:09:25.451929 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:09:25.463817 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:09:25.477601 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:09:25.482763 ignition[948]: INFO : Ignition 2.19.0
Apr 21 10:09:25.482763 ignition[948]: INFO : Stage: mount
Apr 21 10:09:25.484731 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:25.484731 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:25.484731 ignition[948]: INFO : mount: mount passed
Apr 21 10:09:25.484731 ignition[948]: INFO : Ignition finished successfully
Apr 21 10:09:25.484936 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:09:25.491885 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:09:25.644390 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:09:25.649005 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:09:25.660819 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (959)
Apr 21 10:09:25.667108 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:09:25.667147 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:09:25.667160 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:09:25.676917 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:09:25.676978 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:09:25.685561 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:09:25.711473 ignition[975]: INFO : Ignition 2.19.0
Apr 21 10:09:25.712338 ignition[975]: INFO : Stage: files
Apr 21 10:09:25.713612 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:25.713612 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:25.715047 ignition[975]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:09:25.716140 ignition[975]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:09:25.716140 ignition[975]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:09:25.721569 ignition[975]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:09:25.722034 ignition[975]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:09:25.722633 unknown[975]: wrote ssh authorized keys file for user: core
Apr 21 10:09:25.723141 ignition[975]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:09:25.724988 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:09:25.725895 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:09:25.725895 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:09:25.725895 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:09:25.947035 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:09:26.138422 systemd-networkd[791]: eth1: Gained IPv6LL
Apr 21 10:09:26.374351 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:09:26.374351 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:26.377923 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:09:26.778380 systemd-networkd[791]: eth0: Gained IPv6LL
Apr 21 10:09:26.807338 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:09:27.125510 ignition[975]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:27.125510 ignition[975]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:09:27.128610 ignition[975]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:09:27.146759 ignition[975]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:09:27.146759 ignition[975]: INFO : files: files passed
Apr 21 10:09:27.146759 ignition[975]: INFO : Ignition finished successfully
Apr 21 10:09:27.130244 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:09:27.141430 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:09:27.144924 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:09:27.145834 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:09:27.146361 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:09:27.156423 initrd-setup-root-after-ignition[1005]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:09:27.156423 initrd-setup-root-after-ignition[1005]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:09:27.159074 initrd-setup-root-after-ignition[1009]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:09:27.162216 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:09:27.162972 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:09:27.167925 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:09:27.187835 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:09:27.187933 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:09:27.188942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:09:27.189620 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:09:27.190443 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:09:27.191931 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:09:27.204204 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:09:27.209924 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:09:27.216402 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:09:27.216933 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:09:27.217764 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:09:27.218499 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:09:27.218599 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:09:27.219742 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:09:27.220502 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:09:27.221222 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:09:27.221973 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:09:27.222716 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:09:27.223498 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:09:27.224207 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:09:27.224979 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:09:27.225686 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:09:27.226400 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:09:27.227110 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:09:27.227187 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:09:27.228182 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:09:27.228946 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:09:27.229671 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:09:27.229833 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:09:27.230367 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:09:27.230438 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:09:27.231444 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:09:27.231526 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:09:27.232284 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:09:27.232376 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:09:27.233124 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 10:09:27.233203 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:09:27.239919 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:09:27.241919 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:09:27.242641 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:09:27.242727 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:09:27.243210 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:09:27.243285 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:09:27.248865 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:09:27.249896 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:09:27.251566 ignition[1029]: INFO : Ignition 2.19.0
Apr 21 10:09:27.251566 ignition[1029]: INFO : Stage: umount
Apr 21 10:09:27.253056 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:27.253056 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:27.253056 ignition[1029]: INFO : umount: umount passed
Apr 21 10:09:27.253056 ignition[1029]: INFO : Ignition finished successfully
Apr 21 10:09:27.255201 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:09:27.255294 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:09:27.256576 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:09:27.256618 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:09:27.257058 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:09:27.257098 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:09:27.257434 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:09:27.257465 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:09:27.257787 systemd[1]: Stopped target network.target - Network.
Apr 21 10:09:27.260846 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:09:27.260917 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:09:27.261527 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:09:27.264625 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:09:27.269843 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:09:27.270234 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:09:27.270631 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:09:27.271054 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:09:27.271104 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:09:27.271500 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:09:27.271535 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:09:27.273850 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:09:27.273898 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:09:27.274293 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:09:27.274330 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:09:27.274734 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:09:27.275170 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:09:27.276958 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:09:27.277517 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:09:27.277602 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:09:27.277858 systemd-networkd[791]: eth0: DHCPv6 lease lost
Apr 21 10:09:27.278767 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:09:27.278858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:09:27.282853 systemd-networkd[791]: eth1: DHCPv6 lease lost
Apr 21 10:09:27.282877 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:09:27.282978 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:09:27.285040 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:09:27.285164 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:09:27.286705 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:09:27.286750 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:09:27.290896 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:09:27.291228 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:09:27.291271 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:09:27.291651 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:09:27.291687 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:09:27.292867 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:09:27.292904 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:09:27.293238 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:09:27.293274 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:09:27.293652 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:09:27.305667 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:09:27.305770 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:09:27.313183 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:09:27.313330 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:09:27.314258 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:09:27.314324 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:09:27.314993 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:09:27.315035 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:09:27.315576 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:09:27.315612 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:09:27.316578 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:09:27.316614 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:09:27.317570 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:09:27.317608 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:09:27.324936 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:09:27.325293 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:09:27.325343 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:09:27.325720 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:09:27.325757 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:09:27.326142 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:09:27.326178 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:09:27.326569 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:09:27.326609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:09:27.330955 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:09:27.331057 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:09:27.332058 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:09:27.333305 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:09:27.345642 systemd[1]: Switching root.
Apr 21 10:09:27.389020 systemd-journald[189]: Journal stopped
Apr 21 10:09:28.536334 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:09:28.536400 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:09:28.536412 kernel: SELinux: policy capability open_perms=1
Apr 21 10:09:28.536421 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:09:28.536429 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:09:28.536438 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:09:28.536453 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:09:28.536466 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:09:28.536475 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:09:28.536483 kernel: audit: type=1403 audit(1776766167.583:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:09:28.536492 systemd[1]: Successfully loaded SELinux policy in 45.974ms.
Apr 21 10:09:28.536516 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.568ms.
Apr 21 10:09:28.536526 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:09:28.536535 systemd[1]: Detected virtualization kvm.
Apr 21 10:09:28.536544 systemd[1]: Detected architecture x86-64.
Apr 21 10:09:28.536553 systemd[1]: Detected first boot.
Apr 21 10:09:28.536562 systemd[1]: Hostname set to .
Apr 21 10:09:28.536571 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:09:28.536580 zram_generator::config[1088]: No configuration found.
Apr 21 10:09:28.536593 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:09:28.536602 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:09:28.536611 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 10:09:28.536620 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:09:28.536629 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:09:28.536638 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:09:28.536647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:09:28.536656 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:09:28.536665 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:09:28.536676 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:09:28.536685 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:09:28.536694 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:09:28.536706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:09:28.536716 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:09:28.536727 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:09:28.536737 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:09:28.536745 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:09:28.536754 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:09:28.536765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:09:28.536774 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:09:28.536783 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:09:28.538868 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:09:28.538883 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:09:28.539052 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:09:28.539069 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:09:28.539079 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:09:28.539093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:09:28.539102 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:09:28.539112 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:09:28.539121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:09:28.539129 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:09:28.539138 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:09:28.539147 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:09:28.539156 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:09:28.539168 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:09:28.539177 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:28.539186 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:09:28.539196 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:09:28.539205 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:09:28.539214 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:09:28.539223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:09:28.539232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:09:28.539243 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:09:28.539253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:09:28.539262 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:09:28.539270 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:09:28.539279 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:09:28.539288 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:09:28.539297 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:09:28.539309 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 10:09:28.539327 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 10:09:28.539336 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:09:28.539346 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:09:28.539359 kernel: loop: module loaded
Apr 21 10:09:28.539368 kernel: fuse: init (API version 7.39)
Apr 21 10:09:28.539377 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:09:28.539386 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:09:28.539413 systemd-journald[1193]: Collecting audit messages is disabled.
Apr 21 10:09:28.539434 kernel: ACPI: bus type drm_connector registered
Apr 21 10:09:28.539443 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:09:28.539452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:28.539461 systemd-journald[1193]: Journal started
Apr 21 10:09:28.539476 systemd-journald[1193]: Runtime Journal (/run/log/journal/57c7d9c6554642069ded26c1018c216e) is 8.0M, max 76.3M, 68.3M free.
Apr 21 10:09:28.552660 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:09:28.548843 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:09:28.549403 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:09:28.549879 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:09:28.550397 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:09:28.550927 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:09:28.551383 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:09:28.552164 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:09:28.552899 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:09:28.553541 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:09:28.553699 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:09:28.554429 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:09:28.554582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:09:28.555311 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:09:28.555460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:09:28.556168 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:09:28.556361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:09:28.557223 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:09:28.557375 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:09:28.558131 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:09:28.558305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:09:28.559137 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:09:28.559753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:09:28.560389 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:09:28.570811 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:09:28.577882 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:09:28.580865 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:09:28.581329 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:09:28.586888 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:09:28.589753 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:09:28.590769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:09:28.594483 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:09:28.597082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:09:28.603165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:09:28.609301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:09:28.615932 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:09:28.617936 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:09:28.627065 systemd-journald[1193]: Time spent on flushing to /var/log/journal/57c7d9c6554642069ded26c1018c216e is 30.740ms for 1167 entries.
Apr 21 10:09:28.627065 systemd-journald[1193]: System Journal (/var/log/journal/57c7d9c6554642069ded26c1018c216e) is 8.0M, max 584.8M, 576.8M free.
Apr 21 10:09:28.694941 systemd-journald[1193]: Received client request to flush runtime journal.
Apr 21 10:09:28.630979 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:09:28.634022 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:09:28.643986 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:09:28.689913 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Apr 21 10:09:28.689924 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Apr 21 10:09:28.698906 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:09:28.701853 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:09:28.716903 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:09:28.736714 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:09:28.749004 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:09:28.760755 udevadm[1251]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:09:28.770203 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:09:28.776951 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:09:28.791165 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Apr 21 10:09:28.791444 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Apr 21 10:09:28.796052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:09:28.956202 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:09:28.962959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:09:28.983924 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Apr 21 10:09:29.009221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:09:29.023042 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:09:29.055907 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:09:29.109779 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 21 10:09:29.117428 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:09:29.138081 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:09:29.162814 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1280)
Apr 21 10:09:29.193667 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 21 10:09:29.206578 systemd-networkd[1269]: lo: Link UP
Apr 21 10:09:29.206588 systemd-networkd[1269]: lo: Gained carrier
Apr 21 10:09:29.209853 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:09:29.210603 systemd-networkd[1269]: Enumeration completed
Apr 21 10:09:29.210963 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:09:29.212102 systemd-networkd[1269]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:29.212111 systemd-networkd[1269]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:09:29.212735 systemd-networkd[1269]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:29.212744 systemd-networkd[1269]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:09:29.213282 systemd-networkd[1269]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:29.213316 systemd-networkd[1269]: eth0: Link UP
Apr 21 10:09:29.213320 systemd-networkd[1269]: eth0: Gained carrier
Apr 21 10:09:29.213327 systemd-networkd[1269]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:29.218027 systemd-networkd[1269]: eth1: Link UP
Apr 21 10:09:29.218037 systemd-networkd[1269]: eth1: Gained carrier
Apr 21 10:09:29.218047 systemd-networkd[1269]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:29.221272 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:09:29.228787 systemd-networkd[1269]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:29.234327 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 21 10:09:29.234641 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Apr 21 10:09:29.235549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:29.235658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:09:29.238117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:09:29.241234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:09:29.248419 systemd-networkd[1269]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 10:09:29.249128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:09:29.251118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:09:29.251151 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:09:29.251183 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:29.258366 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:09:29.260448 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:09:29.264421 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:09:29.264587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:09:29.270048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:09:29.271911 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 21 10:09:29.272861 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:09:29.275276 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:09:29.275465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:09:29.276906 systemd-networkd[1269]: eth0: DHCPv4 address 37.27.217.213/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 10:09:29.277471 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:09:29.289815 kernel: EDAC MC: Ver: 3.0.0
Apr 21 10:09:29.322671 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 21 10:09:29.327110 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:09:29.327249 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:09:29.327394 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:09:29.323307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:09:29.328962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:09:29.329194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:09:29.340958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:09:29.344687 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 21 10:09:29.344738 kernel: Console: switching to colour dummy device 80x25
Apr 21 10:09:29.347811 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 21 10:09:29.348014 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 21 10:09:29.348029 kernel: [drm] features: -context_init
Apr 21 10:09:29.350888 kernel: [drm] number of scanouts: 1
Apr 21 10:09:29.350924 kernel: [drm] number of cap sets: 0
Apr 21 10:09:29.354890 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 21 10:09:29.357811 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 21 10:09:29.364193 kernel: Console: switching to colour frame buffer device 160x50
Apr 21 10:09:29.369980 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 21 10:09:29.373463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:09:29.373720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:09:29.379907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:09:29.423852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:09:29.506041 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:09:29.520163 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:09:29.536081 lvm[1337]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:09:29.574364 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:09:29.576474 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:09:29.582055 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:09:29.587323 lvm[1340]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:09:29.617709 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:09:29.618582 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:09:29.619266 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:09:29.619289 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:09:29.619356 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:09:29.620348 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:09:29.630124 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:09:29.636093 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:09:29.637287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:09:29.643942 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:09:29.648120 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:09:29.651258 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:09:29.655847 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:09:29.671625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:09:29.677089 kernel: loop0: detected capacity change from 0 to 8
Apr 21 10:09:29.686812 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:09:29.699432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:09:29.700089 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:09:29.706816 kernel: loop1: detected capacity change from 0 to 142488
Apr 21 10:09:29.745867 kernel: loop2: detected capacity change from 0 to 140768
Apr 21 10:09:29.789831 kernel: loop3: detected capacity change from 0 to 228704
Apr 21 10:09:29.835131 kernel: loop4: detected capacity change from 0 to 8
Apr 21 10:09:29.841961 kernel: loop5: detected capacity change from 0 to 142488
Apr 21 10:09:29.865821 kernel: loop6: detected capacity change from 0 to 140768
Apr 21 10:09:29.888083 kernel: loop7: detected capacity change from 0 to 228704
Apr 21 10:09:29.911376 (sd-merge)[1361]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 21 10:09:29.912004 (sd-merge)[1361]: Merged extensions into '/usr'.
Apr 21 10:09:29.915857 systemd[1]: Reloading requested from client PID 1348 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:09:29.915954 systemd[1]: Reloading...
Apr 21 10:09:29.952827 zram_generator::config[1386]: No configuration found.
Apr 21 10:09:30.052815 ldconfig[1345]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:09:30.094218 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:09:30.153325 systemd[1]: Reloading finished in 236 ms.
Apr 21 10:09:30.172342 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:09:30.178458 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:09:30.188080 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:09:30.191710 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:09:30.211733 systemd[1]: Reloading requested from client PID 1439 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:09:30.211763 systemd[1]: Reloading...
Apr 21 10:09:30.221684 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:09:30.222435 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:09:30.223389 systemd-tmpfiles[1440]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:09:30.223645 systemd-tmpfiles[1440]: ACLs are not supported, ignoring.
Apr 21 10:09:30.223707 systemd-tmpfiles[1440]: ACLs are not supported, ignoring.
Apr 21 10:09:30.227148 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:09:30.227250 systemd-tmpfiles[1440]: Skipping /boot
Apr 21 10:09:30.239327 systemd-tmpfiles[1440]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:09:30.239416 systemd-tmpfiles[1440]: Skipping /boot
Apr 21 10:09:30.285823 zram_generator::config[1470]: No configuration found.
Apr 21 10:09:30.385918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:09:30.441769 systemd[1]: Reloading finished in 228 ms.
Apr 21 10:09:30.459362 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:09:30.487306 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:09:30.496038 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:09:30.511077 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:09:30.522388 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:09:30.526528 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:09:30.532636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:30.534175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:09:30.542801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:09:30.549884 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:09:30.553013 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:09:30.558988 augenrules[1541]: No rules
Apr 21 10:09:30.558346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:09:30.558472 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:30.566456 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:09:30.571551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:09:30.572485 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:09:30.575737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:09:30.576015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:09:30.579989 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:09:30.581974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:09:30.587386 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:09:30.600450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:30.600652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:09:30.602908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:09:30.608443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:09:30.622765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:09:30.623372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:09:30.632049 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:09:30.632900 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:30.641993 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:09:30.644363 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:09:30.646026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:09:30.651066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:09:30.651243 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:09:30.654035 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:09:30.654206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:09:30.662765 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:30.665684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:09:30.673011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:09:30.681736 systemd-resolved[1536]: Positive Trust Anchors:
Apr 21 10:09:30.681895 systemd-resolved[1536]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:09:30.681918 systemd-resolved[1536]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:09:30.684067 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:09:30.687004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:09:30.695710 systemd-resolved[1536]: Using system hostname 'ci-4081-3-7-e-553501566f'.
Apr 21 10:09:30.698706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:09:30.699412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:09:30.699534 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:30.703120 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:09:30.705453 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:09:30.708904 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:09:30.712087 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:09:30.712343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:09:30.714422 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:09:30.714648 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:09:30.717393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:09:30.717622 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:09:30.719395 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:09:30.719608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:09:30.728507 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:09:30.733101 systemd[1]: Reached target network.target - Network.
Apr 21 10:09:30.736756 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:09:30.737549 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:09:30.737637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:09:30.744943 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:09:30.745459 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:09:30.788171 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:09:30.788960 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:09:30.789544 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:09:30.791204 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:09:30.791744 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:09:30.792464 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:09:30.792490 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:09:30.793580 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:09:30.794859 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:09:30.797649 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:09:30.798082 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:09:30.800885 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:09:30.802618 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:09:30.806008 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:09:30.808777 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:09:30.810330 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:09:30.810842 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:09:30.811531 systemd[1]: System is tainted: cgroupsv1
Apr 21 10:09:30.811568 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:09:30.811591 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:09:30.813411 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:09:30.818957 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:09:30.822567 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:09:30.837928 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:09:30.844421 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:09:30.845422 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:09:30.852015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:09:30.854044 jq[1600]: false
Apr 21 10:09:30.857883 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:09:30.871273 coreos-metadata[1597]: Apr 21 10:09:30.870 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 21 10:09:30.872365 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 21 10:09:30.873073 dbus-daemon[1599]: [system] SELinux support is enabled
Apr 21 10:09:30.878873 coreos-metadata[1597]: Apr 21 10:09:30.878 INFO Fetch successful
Apr 21 10:09:30.878873 coreos-metadata[1597]: Apr 21 10:09:30.878 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 21 10:09:30.880179 coreos-metadata[1597]: Apr 21 10:09:30.880 INFO Fetch successful
Apr 21 10:09:30.883496 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:09:30.904001 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:09:30.909168 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:09:30.911495 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found loop4
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found loop5
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found loop6
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found loop7
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found sda
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found sda1
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found sda2
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found sda3
Apr 21 10:09:30.917773 extend-filesystems[1603]: Found usr
Apr 21 10:09:30.956992 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks
Apr 21 10:09:30.918031 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:09:30.957127 extend-filesystems[1603]: Found sda4
Apr 21 10:09:30.957127 extend-filesystems[1603]: Found sda6
Apr 21 10:09:30.957127 extend-filesystems[1603]: Found sda7
Apr 21 10:09:30.957127 extend-filesystems[1603]: Found sda9
Apr 21 10:09:30.957127 extend-filesystems[1603]: Checking size of /dev/sda9
Apr 21 10:09:30.957127 extend-filesystems[1603]: Resized partition /dev/sda9
Apr 21 10:09:30.924893 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:09:30.973821 extend-filesystems[1634]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:09:30.983519 update_engine[1623]: I20260421 10:09:30.937871 1623 main.cc:92] Flatcar Update Engine starting
Apr 21 10:09:30.983519 update_engine[1623]: I20260421 10:09:30.958086 1623 update_check_scheduler.cc:74] Next update check in 6m47s
Apr 21 10:09:30.931377 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:09:30.986987 jq[1626]: true
Apr 21 10:09:30.939330 systemd-networkd[1269]: eth0: Gained IPv6LL
Apr 21 10:09:30.942654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:09:30.972159 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:09:30.972425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:09:30.972717 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:09:30.979099 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:09:30.989262 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:09:30.989511 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:09:31.030805 (ntainerd)[1641]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:09:31.041599 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:09:31.043502 jq[1640]: true
Apr 21 10:09:31.055567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:09:31.056228 tar[1639]: linux-amd64/LICENSE
Apr 21 10:09:31.059187 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:09:31.059520 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:09:31.059540 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:09:31.059864 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:09:31.059879 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:09:31.072723 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:09:31.609337 systemd-resolved[1536]: Clock change detected. Flushing caches.
Apr 21 10:09:31.609822 systemd-timesyncd[1592]: Contacted time server 37.221.199.157:123 (0.flatcar.pool.ntp.org).
Apr 21 10:09:31.609890 systemd-timesyncd[1592]: Initial clock synchronization to Tue 2026-04-21 10:09:31.609288 UTC.
Apr 21 10:09:31.621248 tar[1639]: linux-amd64/helm
Apr 21 10:09:31.636599 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:09:31.655859 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1273)
Apr 21 10:09:31.662055 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:09:31.722319 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:09:31.723525 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:09:31.732804 systemd-networkd[1269]: eth1: Gained IPv6LL
Apr 21 10:09:31.735542 systemd-logind[1622]: New seat seat0.
Apr 21 10:09:31.737437 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:09:31.741470 systemd-logind[1622]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 21 10:09:31.741493 systemd-logind[1622]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:09:31.742459 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:09:31.798054 bash[1691]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:09:31.800475 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:09:31.817027 systemd[1]: Starting sshkeys.service...
Apr 21 10:09:31.833656 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:09:31.843625 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:09:31.846501 containerd[1641]: time="2026-04-21T10:09:31.845145001Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:09:31.881542 coreos-metadata[1703]: Apr 21 10:09:31.881 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 21 10:09:31.884872 coreos-metadata[1703]: Apr 21 10:09:31.883 INFO Fetch successful
Apr 21 10:09:31.888701 unknown[1703]: wrote ssh authorized keys file for user: core
Apr 21 10:09:31.890064 locksmithd[1667]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:09:31.904591 containerd[1641]: time="2026-04-21T10:09:31.904548216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:09:31.910349 containerd[1641]: time="2026-04-21T10:09:31.909898481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:09:31.910349 containerd[1641]: time="2026-04-21T10:09:31.909928967Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:09:31.910349 containerd[1641]: time="2026-04-21T10:09:31.909941376Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:09:31.910349 containerd[1641]: time="2026-04-21T10:09:31.910084891Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:09:31.910349 containerd[1641]: time="2026-04-21T10:09:31.910096458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:09:31.910349 containerd[1641]: time="2026-04-21T10:09:31.910149208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:09:31.910349 containerd[1641]: time="2026-04-21T10:09:31.910157540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:09:31.911387 containerd[1641]: time="2026-04-21T10:09:31.910645983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:09:31.911387 containerd[1641]: time="2026-04-21T10:09:31.910662828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:09:31.911387 containerd[1641]: time="2026-04-21T10:09:31.910677861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:09:31.911387 containerd[1641]: time="2026-04-21T10:09:31.910685522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:09:31.911387 containerd[1641]: time="2026-04-21T10:09:31.910759874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:09:31.911589 containerd[1641]: time="2026-04-21T10:09:31.911552102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:09:31.913865 containerd[1641]: time="2026-04-21T10:09:31.912983639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:09:31.913865 containerd[1641]: time="2026-04-21T10:09:31.913003229Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:09:31.913865 containerd[1641]: time="2026-04-21T10:09:31.913094505Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:09:31.913865 containerd[1641]: time="2026-04-21T10:09:31.913132773Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:09:31.920109 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Apr 21 10:09:31.935740 extend-filesystems[1634]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 21 10:09:31.935740 extend-filesystems[1634]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 21 10:09:31.935740 extend-filesystems[1634]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Apr 21 10:09:31.945087 extend-filesystems[1603]: Resized filesystem in /dev/sda9
Apr 21 10:09:31.945087 extend-filesystems[1603]: Found sr0
Apr 21 10:09:31.940156 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:09:31.940414 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:09:31.952565 sshd_keygen[1630]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:09:31.955194 containerd[1641]: time="2026-04-21T10:09:31.954783870Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:09:31.957850 update-ssh-keys[1712]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:09:31.959133 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966449378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966489748Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966504761Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966517129Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966655777Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966876358Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966972322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966983189Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.966992813Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.967002658Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.967021286Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.967029588Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.967041115Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.974437 containerd[1641]: time="2026-04-21T10:09:31.967051231Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.969301 systemd[1]: Finished sshkeys.service.
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967060835Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967069879Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967078061Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967092492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967102087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967110840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967121676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967130509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967139763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967148687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967157710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967166524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967176178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.974709 containerd[1641]: time="2026-04-21T10:09:31.967184380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967193524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967201826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967215487Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967232162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967240024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967247525Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967285171Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967297891Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967305222Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967313344Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967320555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967328947Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967336498Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:09:31.975157 containerd[1641]: time="2026-04-21T10:09:31.967344240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:09:31.975354 containerd[1641]: time="2026-04-21T10:09:31.967522027Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:09:31.975354 containerd[1641]: time="2026-04-21T10:09:31.967562427Z" level=info msg="Connect containerd service"
Apr 21 10:09:31.975354 containerd[1641]: time="2026-04-21T10:09:31.967589528Z" level=info msg="using legacy CRI server"
Apr 21 10:09:31.975354 containerd[1641]: time="2026-04-21T10:09:31.967594385Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:09:31.975354 containerd[1641]: time="2026-04-21T10:09:31.967653734Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:09:31.980408 containerd[1641]: time="2026-04-21T10:09:31.979876208Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:09:31.980408 containerd[1641]: time="2026-04-21T10:09:31.980179333Z" level=info msg="Start subscribing containerd event"
Apr 21 10:09:31.980408 containerd[1641]: time="2026-04-21T10:09:31.980218862Z" level=info msg="Start recovering state"
Apr 21 10:09:31.977021 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:09:31.980546 containerd[1641]: time="2026-04-21T10:09:31.980456979Z" level=info msg="Start event monitor"
Apr 21 10:09:31.980546 containerd[1641]: time="2026-04-21T10:09:31.980490219Z" level=info msg="Start snapshots syncer"
Apr 21 10:09:31.980546 containerd[1641]: time="2026-04-21T10:09:31.980497670Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:09:31.980546 containerd[1641]: time="2026-04-21T10:09:31.980503249Z" level=info msg="Start streaming server"
Apr 21 10:09:31.984113 containerd[1641]: time="2026-04-21T10:09:31.981095267Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:09:31.984113 containerd[1641]: time="2026-04-21T10:09:31.981141786Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:09:31.984113 containerd[1641]: time="2026-04-21T10:09:31.981189889Z" level=info msg="containerd successfully booted in 0.140680s"
Apr 21 10:09:31.981796 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:09:31.991222 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:09:32.015454 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:09:32.015711 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:09:32.030040 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:09:32.042552 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:09:32.055142 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:09:32.059257 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:09:32.059678 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:09:32.305707 tar[1639]: linux-amd64/README.md
Apr 21 10:09:32.320668 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:09:32.639966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:09:32.646147 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:09:32.647669 systemd[1]: Startup finished in 7.290s (kernel) + 4.574s (userspace) = 11.864s.
Apr 21 10:09:32.656378 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:09:33.109374 kubelet[1757]: E0421 10:09:33.109226 1757 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:09:33.113162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:09:33.113679 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:09:35.699243 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:09:35.707192 systemd[1]: Started sshd@0-37.27.217.213:22-50.85.169.122:44840.service - OpenSSH per-connection server daemon (50.85.169.122:44840).
Apr 21 10:09:35.934086 sshd[1770]: Accepted publickey for core from 50.85.169.122 port 44840 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:09:35.938070 sshd[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:09:35.951939 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:09:35.959162 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:09:35.962420 systemd-logind[1622]: New session 1 of user core.
Apr 21 10:09:35.987498 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:09:35.997559 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:09:36.015993 (systemd)[1776]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:09:36.127719 systemd[1776]: Queued start job for default target default.target. Apr 21 10:09:36.128165 systemd[1776]: Created slice app.slice - User Application Slice. Apr 21 10:09:36.128180 systemd[1776]: Reached target paths.target - Paths. Apr 21 10:09:36.128190 systemd[1776]: Reached target timers.target - Timers. Apr 21 10:09:36.132961 systemd[1776]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:09:36.140991 systemd[1776]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:09:36.141090 systemd[1776]: Reached target sockets.target - Sockets. Apr 21 10:09:36.141116 systemd[1776]: Reached target basic.target - Basic System. Apr 21 10:09:36.141191 systemd[1776]: Reached target default.target - Main User Target. Apr 21 10:09:36.141255 systemd[1776]: Startup finished in 113ms. Apr 21 10:09:36.141418 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:09:36.142967 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:09:36.317398 systemd[1]: Started sshd@1-37.27.217.213:22-50.85.169.122:44848.service - OpenSSH per-connection server daemon (50.85.169.122:44848). Apr 21 10:09:36.542864 sshd[1788]: Accepted publickey for core from 50.85.169.122 port 44848 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:36.545805 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:36.554179 systemd-logind[1622]: New session 2 of user core. Apr 21 10:09:36.557319 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:09:36.716294 sshd[1788]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:36.722483 systemd-logind[1622]: Session 2 logged out. Waiting for processes to exit. 
Apr 21 10:09:36.724650 systemd[1]: sshd@1-37.27.217.213:22-50.85.169.122:44848.service: Deactivated successfully. Apr 21 10:09:36.730013 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:09:36.731559 systemd-logind[1622]: Removed session 2. Apr 21 10:09:36.755249 systemd[1]: Started sshd@2-37.27.217.213:22-50.85.169.122:44852.service - OpenSSH per-connection server daemon (50.85.169.122:44852). Apr 21 10:09:36.968990 sshd[1796]: Accepted publickey for core from 50.85.169.122 port 44852 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:36.971614 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:36.980526 systemd-logind[1622]: New session 3 of user core. Apr 21 10:09:36.990452 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:09:37.138424 sshd[1796]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:37.145569 systemd[1]: sshd@2-37.27.217.213:22-50.85.169.122:44852.service: Deactivated successfully. Apr 21 10:09:37.150884 systemd-logind[1622]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:09:37.152680 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:09:37.155084 systemd-logind[1622]: Removed session 3. Apr 21 10:09:37.175324 systemd[1]: Started sshd@3-37.27.217.213:22-50.85.169.122:44858.service - OpenSSH per-connection server daemon (50.85.169.122:44858). Apr 21 10:09:37.400885 sshd[1804]: Accepted publickey for core from 50.85.169.122 port 44858 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:37.402724 sshd[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:37.410367 systemd-logind[1622]: New session 4 of user core. Apr 21 10:09:37.419322 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 21 10:09:37.576298 sshd[1804]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:37.582405 systemd-logind[1622]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:09:37.585250 systemd[1]: sshd@3-37.27.217.213:22-50.85.169.122:44858.service: Deactivated successfully. Apr 21 10:09:37.589678 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:09:37.592187 systemd-logind[1622]: Removed session 4. Apr 21 10:09:37.613317 systemd[1]: Started sshd@4-37.27.217.213:22-50.85.169.122:44870.service - OpenSSH per-connection server daemon (50.85.169.122:44870). Apr 21 10:09:37.832923 sshd[1812]: Accepted publickey for core from 50.85.169.122 port 44870 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:37.834583 sshd[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:37.841163 systemd-logind[1622]: New session 5 of user core. Apr 21 10:09:37.852159 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:09:37.986796 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:09:37.987634 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:09:38.008132 sudo[1816]: pam_unix(sudo:session): session closed for user root Apr 21 10:09:38.041315 sshd[1812]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:38.047065 systemd[1]: sshd@4-37.27.217.213:22-50.85.169.122:44870.service: Deactivated successfully. Apr 21 10:09:38.053947 systemd-logind[1622]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:09:38.054254 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:09:38.057197 systemd-logind[1622]: Removed session 5. Apr 21 10:09:38.078197 systemd[1]: Started sshd@5-37.27.217.213:22-50.85.169.122:44876.service - OpenSSH per-connection server daemon (50.85.169.122:44876). 
Apr 21 10:09:38.305893 sshd[1821]: Accepted publickey for core from 50.85.169.122 port 44876 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:38.310125 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:38.317653 systemd-logind[1622]: New session 6 of user core. Apr 21 10:09:38.324377 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 21 10:09:38.441315 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:09:38.441620 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:09:38.446108 sudo[1826]: pam_unix(sudo:session): session closed for user root Apr 21 10:09:38.452080 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:09:38.452378 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:09:38.471254 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:09:38.476728 auditctl[1829]: No rules Apr 21 10:09:38.473730 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:09:38.474314 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:09:38.485779 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:09:38.527893 augenrules[1848]: No rules Apr 21 10:09:38.529092 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:09:38.534079 sudo[1825]: pam_unix(sudo:session): session closed for user root Apr 21 10:09:38.565004 sshd[1821]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:38.569609 systemd[1]: sshd@5-37.27.217.213:22-50.85.169.122:44876.service: Deactivated successfully. Apr 21 10:09:38.574593 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 21 10:09:38.576512 systemd-logind[1622]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:09:38.577759 systemd-logind[1622]: Removed session 6. Apr 21 10:09:38.601350 systemd[1]: Started sshd@6-37.27.217.213:22-50.85.169.122:44880.service - OpenSSH per-connection server daemon (50.85.169.122:44880). Apr 21 10:09:38.804802 sshd[1857]: Accepted publickey for core from 50.85.169.122 port 44880 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:38.807447 sshd[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:38.815677 systemd-logind[1622]: New session 7 of user core. Apr 21 10:09:38.821335 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:09:38.948444 sudo[1861]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:09:38.949166 sudo[1861]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:09:39.245198 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:09:39.258581 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:09:39.502050 dockerd[1877]: time="2026-04-21T10:09:39.500675566Z" level=info msg="Starting up" Apr 21 10:09:39.708983 dockerd[1877]: time="2026-04-21T10:09:39.708939031Z" level=info msg="Loading containers: start." Apr 21 10:09:39.814014 kernel: Initializing XFRM netlink socket Apr 21 10:09:39.883605 systemd-networkd[1269]: docker0: Link UP Apr 21 10:09:39.903144 dockerd[1877]: time="2026-04-21T10:09:39.903099081Z" level=info msg="Loading containers: done." Apr 21 10:09:39.919241 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3806266370-merged.mount: Deactivated successfully. 
Apr 21 10:09:39.923617 dockerd[1877]: time="2026-04-21T10:09:39.923571540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 10:09:39.923791 dockerd[1877]: time="2026-04-21T10:09:39.923663688Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 21 10:09:39.923791 dockerd[1877]: time="2026-04-21T10:09:39.923747604Z" level=info msg="Daemon has completed initialization" Apr 21 10:09:39.951216 dockerd[1877]: time="2026-04-21T10:09:39.950969597Z" level=info msg="API listen on /run/docker.sock" Apr 21 10:09:39.951183 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 21 10:09:40.388630 containerd[1641]: time="2026-04-21T10:09:40.388582536Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 21 10:09:41.037493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746881762.mount: Deactivated successfully. 
Apr 21 10:09:42.129663 containerd[1641]: time="2026-04-21T10:09:42.129595255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:42.130860 containerd[1641]: time="2026-04-21T10:09:42.130663998Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30194089" Apr 21 10:09:42.131768 containerd[1641]: time="2026-04-21T10:09:42.131735686Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:42.134305 containerd[1641]: time="2026-04-21T10:09:42.134118680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:42.134788 containerd[1641]: time="2026-04-21T10:09:42.134756517Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.746127371s" Apr 21 10:09:42.134823 containerd[1641]: time="2026-04-21T10:09:42.134790838Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 21 10:09:42.135255 containerd[1641]: time="2026-04-21T10:09:42.135229927Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 21 10:09:43.222944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 21 10:09:43.230541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:09:43.367079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:43.370827 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:09:43.402858 kubelet[2090]: E0421 10:09:43.402323 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:09:43.407028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:09:43.407215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:09:43.431452 containerd[1641]: time="2026-04-21T10:09:43.431396156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:43.432633 containerd[1641]: time="2026-04-21T10:09:43.432434243Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171469" Apr 21 10:09:43.434269 containerd[1641]: time="2026-04-21T10:09:43.433580462Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:43.436538 containerd[1641]: time="2026-04-21T10:09:43.436489276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:43.438859 containerd[1641]: 
time="2026-04-21T10:09:43.437877628Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.302619339s" Apr 21 10:09:43.438859 containerd[1641]: time="2026-04-21T10:09:43.437904549Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 21 10:09:43.439244 containerd[1641]: time="2026-04-21T10:09:43.439147613Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 21 10:09:44.490503 containerd[1641]: time="2026-04-21T10:09:44.490440783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:44.491739 containerd[1641]: time="2026-04-21T10:09:44.491536186Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289778" Apr 21 10:09:44.493104 containerd[1641]: time="2026-04-21T10:09:44.492710738Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:44.495212 containerd[1641]: time="2026-04-21T10:09:44.495180312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:44.496063 containerd[1641]: time="2026-04-21T10:09:44.496039511Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id 
\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.05674015s" Apr 21 10:09:44.496192 containerd[1641]: time="2026-04-21T10:09:44.496115455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 21 10:09:44.496823 containerd[1641]: time="2026-04-21T10:09:44.496797818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 21 10:09:45.569939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385151135.mount: Deactivated successfully. Apr 21 10:09:45.943352 containerd[1641]: time="2026-04-21T10:09:45.943213602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:45.944788 containerd[1641]: time="2026-04-21T10:09:45.944618349Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010739" Apr 21 10:09:45.946681 containerd[1641]: time="2026-04-21T10:09:45.945571569Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:45.948335 containerd[1641]: time="2026-04-21T10:09:45.947588664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:45.948335 containerd[1641]: time="2026-04-21T10:09:45.948214012Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.451386469s" Apr 21 10:09:45.948335 containerd[1641]: time="2026-04-21T10:09:45.948253141Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 21 10:09:45.948796 containerd[1641]: time="2026-04-21T10:09:45.948762505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 21 10:09:46.527639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3374491545.mount: Deactivated successfully. Apr 21 10:09:47.439069 containerd[1641]: time="2026-04-21T10:09:47.439000922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:47.440279 containerd[1641]: time="2026-04-21T10:09:47.440141173Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Apr 21 10:09:47.441905 containerd[1641]: time="2026-04-21T10:09:47.441628514Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:47.444738 containerd[1641]: time="2026-04-21T10:09:47.444688724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:47.445731 containerd[1641]: time="2026-04-21T10:09:47.445562475Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.496755834s" Apr 21 10:09:47.445731 containerd[1641]: time="2026-04-21T10:09:47.445596656Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 21 10:09:47.446517 containerd[1641]: time="2026-04-21T10:09:47.446489515Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 21 10:09:47.977855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032715704.mount: Deactivated successfully. Apr 21 10:09:47.988326 containerd[1641]: time="2026-04-21T10:09:47.986815554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:47.988326 containerd[1641]: time="2026-04-21T10:09:47.987654913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Apr 21 10:09:47.989166 containerd[1641]: time="2026-04-21T10:09:47.989098649Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:47.993927 containerd[1641]: time="2026-04-21T10:09:47.993136776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:47.994919 containerd[1641]: time="2026-04-21T10:09:47.994561804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.039128ms" Apr 21 
10:09:47.994919 containerd[1641]: time="2026-04-21T10:09:47.994609896Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 21 10:09:47.995699 containerd[1641]: time="2026-04-21T10:09:47.995454943Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 21 10:09:48.584922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584180794.mount: Deactivated successfully. Apr 21 10:09:49.486396 containerd[1641]: time="2026-04-21T10:09:49.486300552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:49.488456 containerd[1641]: time="2026-04-21T10:09:49.488366651Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719532" Apr 21 10:09:49.489463 containerd[1641]: time="2026-04-21T10:09:49.489419981Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:49.497352 containerd[1641]: time="2026-04-21T10:09:49.497154182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:09:49.499435 containerd[1641]: time="2026-04-21T10:09:49.499006170Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.503505458s" Apr 21 10:09:49.499435 containerd[1641]: time="2026-04-21T10:09:49.499047392Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 21 10:09:52.161407 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:52.168107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:09:52.207963 systemd[1]: Reloading requested from client PID 2260 ('systemctl') (unit session-7.scope)... Apr 21 10:09:52.208097 systemd[1]: Reloading... Apr 21 10:09:52.304896 zram_generator::config[2301]: No configuration found. Apr 21 10:09:52.403242 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:09:52.464982 systemd[1]: Reloading finished in 255 ms. Apr 21 10:09:52.513035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:52.516314 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:09:52.517622 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:09:52.517961 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:52.523104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:09:52.669025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:52.677963 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:09:52.709007 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:09:52.709007 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 21 10:09:52.709007 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:09:52.709412 kubelet[2368]: I0421 10:09:52.709072 2368 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:09:53.260160 kubelet[2368]: I0421 10:09:53.260102 2368 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:09:53.260160 kubelet[2368]: I0421 10:09:53.260154 2368 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:09:53.260548 kubelet[2368]: I0421 10:09:53.260521 2368 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:09:53.297790 kubelet[2368]: E0421 10:09:53.297737 2368 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://37.27.217.213:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:09:53.299093 kubelet[2368]: I0421 10:09:53.298928 2368 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:09:53.305154 kubelet[2368]: E0421 10:09:53.305130 2368 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:09:53.305913 kubelet[2368]: I0421 10:09:53.305242 2368 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 21 10:09:53.308208 kubelet[2368]: I0421 10:09:53.308179 2368 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:09:53.309198 kubelet[2368]: I0421 10:09:53.309136 2368 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:09:53.309291 kubelet[2368]: I0421 10:09:53.309161 2368 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-e-553501566f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"non
e","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 21 10:09:53.309416 kubelet[2368]: I0421 10:09:53.309295 2368 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:09:53.309416 kubelet[2368]: I0421 10:09:53.309304 2368 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:09:53.309416 kubelet[2368]: I0421 10:09:53.309410 2368 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:09:53.315102 kubelet[2368]: I0421 10:09:53.315071 2368 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:09:53.315158 kubelet[2368]: I0421 10:09:53.315122 2368 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:09:53.315178 kubelet[2368]: I0421 10:09:53.315165 2368 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:09:53.317881 kubelet[2368]: I0421 10:09:53.317828 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:09:53.324003 kubelet[2368]: E0421 10:09:53.323537 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://37.27.217.213:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-e-553501566f&limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:09:53.324003 kubelet[2368]: E0421 10:09:53.323789 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://37.27.217.213:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:09:53.324162 kubelet[2368]: I0421 10:09:53.324151 2368 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:09:53.324534 
kubelet[2368]: I0421 10:09:53.324524 2368 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:09:53.325100 kubelet[2368]: W0421 10:09:53.325089 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 21 10:09:53.329738 kubelet[2368]: I0421 10:09:53.329708 2368 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:09:53.329900 kubelet[2368]: I0421 10:09:53.329815 2368 server.go:1289] "Started kubelet" Apr 21 10:09:53.336880 kubelet[2368]: I0421 10:09:53.336786 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:09:53.342086 kubelet[2368]: I0421 10:09:53.341952 2368 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:09:53.348871 kubelet[2368]: I0421 10:09:53.347486 2368 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:09:53.348871 kubelet[2368]: I0421 10:09:53.347515 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:09:53.348871 kubelet[2368]: I0421 10:09:53.347697 2368 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:09:53.348871 kubelet[2368]: I0421 10:09:53.348110 2368 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:09:53.351357 kubelet[2368]: I0421 10:09:53.351336 2368 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:09:53.351648 kubelet[2368]: E0421 10:09:53.351625 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-e-553501566f\" not found" Apr 21 10:09:53.352405 kubelet[2368]: I0421 10:09:53.352382 2368 desired_state_of_world_populator.go:150] "Desired state populator starts 
to run" Apr 21 10:09:53.352538 kubelet[2368]: I0421 10:09:53.352525 2368 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:09:53.353494 kubelet[2368]: E0421 10:09:53.353468 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://37.27.217.213:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:09:53.353692 kubelet[2368]: E0421 10:09:53.353657 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.217.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-553501566f?timeout=10s\": dial tcp 37.27.217.213:6443: connect: connection refused" interval="200ms" Apr 21 10:09:53.354087 kubelet[2368]: I0421 10:09:53.354061 2368 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:09:53.357193 kubelet[2368]: E0421 10:09:53.355265 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.217.213:6443/api/v1/namespaces/default/events\": dial tcp 37.27.217.213:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-e-553501566f.18a857706a586209 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-e-553501566f,UID:ci-4081-3-7-e-553501566f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-e-553501566f,},FirstTimestamp:2026-04-21 10:09:53.329791497 +0000 UTC m=+0.646830406,LastTimestamp:2026-04-21 10:09:53.329791497 +0000 UTC m=+0.646830406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-e-553501566f,}" Apr 21 10:09:53.357459 kubelet[2368]: I0421 10:09:53.357411 2368 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:09:53.357459 kubelet[2368]: I0421 10:09:53.357424 2368 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:09:53.371180 kubelet[2368]: I0421 10:09:53.371155 2368 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:09:53.372207 kubelet[2368]: I0421 10:09:53.372183 2368 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:09:53.372207 kubelet[2368]: I0421 10:09:53.372197 2368 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:09:53.372261 kubelet[2368]: I0421 10:09:53.372213 2368 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:09:53.372261 kubelet[2368]: I0421 10:09:53.372219 2368 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:09:53.372261 kubelet[2368]: E0421 10:09:53.372250 2368 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:09:53.378716 kubelet[2368]: E0421 10:09:53.378703 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://37.27.217.213:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:09:53.378765 kubelet[2368]: E0421 10:09:53.378742 2368 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:09:53.390794 kubelet[2368]: I0421 10:09:53.390746 2368 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:09:53.390794 kubelet[2368]: I0421 10:09:53.390755 2368 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:09:53.390794 kubelet[2368]: I0421 10:09:53.390768 2368 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:09:53.394015 kubelet[2368]: I0421 10:09:53.393867 2368 policy_none.go:49] "None policy: Start" Apr 21 10:09:53.394015 kubelet[2368]: I0421 10:09:53.393881 2368 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:09:53.394015 kubelet[2368]: I0421 10:09:53.393890 2368 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:09:53.403869 kubelet[2368]: E0421 10:09:53.402766 2368 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:09:53.403869 kubelet[2368]: I0421 10:09:53.402930 2368 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:09:53.403869 kubelet[2368]: I0421 10:09:53.402939 2368 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:09:53.404392 kubelet[2368]: I0421 10:09:53.404381 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:09:53.407385 kubelet[2368]: E0421 10:09:53.407375 2368 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 10:09:53.407454 kubelet[2368]: E0421 10:09:53.407446 2368 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-e-553501566f\" not found" Apr 21 10:09:53.484684 kubelet[2368]: E0421 10:09:53.484648 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:53.498147 kubelet[2368]: E0421 10:09:53.498108 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:53.504614 kubelet[2368]: E0421 10:09:53.504583 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:53.506047 kubelet[2368]: I0421 10:09:53.506013 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:53.506785 kubelet[2368]: E0421 10:09:53.506701 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.217.213:6443/api/v1/nodes\": dial tcp 37.27.217.213:6443: connect: connection refused" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:53.553903 kubelet[2368]: I0421 10:09:53.552807 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aff57d252410a2aa90fa2327a6c017e-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-e-553501566f\" (UID: \"4aff57d252410a2aa90fa2327a6c017e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.553903 kubelet[2368]: I0421 10:09:53.552923 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.553903 kubelet[2368]: I0421 10:09:53.552970 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.553903 kubelet[2368]: I0421 10:09:53.553000 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a72b91f833fcc107081de9a04b2a5359-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-e-553501566f\" (UID: \"a72b91f833fcc107081de9a04b2a5359\") " pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.553903 kubelet[2368]: I0421 10:09:53.553024 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4aff57d252410a2aa90fa2327a6c017e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-e-553501566f\" (UID: \"4aff57d252410a2aa90fa2327a6c017e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.554242 kubelet[2368]: I0421 10:09:53.553048 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aff57d252410a2aa90fa2327a6c017e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-e-553501566f\" (UID: \"4aff57d252410a2aa90fa2327a6c017e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.554242 
kubelet[2368]: I0421 10:09:53.553074 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.554242 kubelet[2368]: I0421 10:09:53.553096 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.554242 kubelet[2368]: I0421 10:09:53.553120 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:53.556071 kubelet[2368]: E0421 10:09:53.555183 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.217.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-553501566f?timeout=10s\": dial tcp 37.27.217.213:6443: connect: connection refused" interval="400ms" Apr 21 10:09:53.710007 kubelet[2368]: I0421 10:09:53.709953 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:53.711010 kubelet[2368]: E0421 10:09:53.710917 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.217.213:6443/api/v1/nodes\": dial 
tcp 37.27.217.213:6443: connect: connection refused" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:53.788273 containerd[1641]: time="2026-04-21T10:09:53.788175794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-e-553501566f,Uid:4aff57d252410a2aa90fa2327a6c017e,Namespace:kube-system,Attempt:0,}" Apr 21 10:09:53.806690 containerd[1641]: time="2026-04-21T10:09:53.805556645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-e-553501566f,Uid:b313f488886b897fa1bdcbfdcabbe9fd,Namespace:kube-system,Attempt:0,}" Apr 21 10:09:53.806690 containerd[1641]: time="2026-04-21T10:09:53.806273330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-e-553501566f,Uid:a72b91f833fcc107081de9a04b2a5359,Namespace:kube-system,Attempt:0,}" Apr 21 10:09:53.956376 kubelet[2368]: E0421 10:09:53.956311 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.217.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-553501566f?timeout=10s\": dial tcp 37.27.217.213:6443: connect: connection refused" interval="800ms" Apr 21 10:09:54.113969 kubelet[2368]: I0421 10:09:54.113599 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:54.114375 kubelet[2368]: E0421 10:09:54.114345 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.217.213:6443/api/v1/nodes\": dial tcp 37.27.217.213:6443: connect: connection refused" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:54.209012 kubelet[2368]: E0421 10:09:54.208946 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://37.27.217.213:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Apr 21 10:09:54.334068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499639164.mount: Deactivated successfully. Apr 21 10:09:54.347895 containerd[1641]: time="2026-04-21T10:09:54.345908683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:09:54.347895 containerd[1641]: time="2026-04-21T10:09:54.347246229Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:09:54.348930 containerd[1641]: time="2026-04-21T10:09:54.348881783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:09:54.350188 containerd[1641]: time="2026-04-21T10:09:54.350106069Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:09:54.351114 containerd[1641]: time="2026-04-21T10:09:54.351077106Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:09:54.352933 containerd[1641]: time="2026-04-21T10:09:54.352898618Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:09:54.353494 containerd[1641]: time="2026-04-21T10:09:54.353427381Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 21 10:09:54.356349 containerd[1641]: time="2026-04-21T10:09:54.356293951Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:09:54.361879 containerd[1641]: time="2026-04-21T10:09:54.361127050Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.435573ms" Apr 21 10:09:54.364554 containerd[1641]: time="2026-04-21T10:09:54.364425979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 576.135774ms" Apr 21 10:09:54.366798 containerd[1641]: time="2026-04-21T10:09:54.366744146Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 560.379839ms" Apr 21 10:09:54.463633 kubelet[2368]: E0421 10:09:54.463593 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://37.27.217.213:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:09:54.500884 containerd[1641]: time="2026-04-21T10:09:54.500636384Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:09:54.500884 containerd[1641]: time="2026-04-21T10:09:54.500688082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:09:54.500884 containerd[1641]: time="2026-04-21T10:09:54.500699619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:54.500884 containerd[1641]: time="2026-04-21T10:09:54.500779770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:54.508031 containerd[1641]: time="2026-04-21T10:09:54.507796775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:09:54.508031 containerd[1641]: time="2026-04-21T10:09:54.507881362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:09:54.508031 containerd[1641]: time="2026-04-21T10:09:54.507893520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:54.508031 containerd[1641]: time="2026-04-21T10:09:54.507965829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:54.509391 containerd[1641]: time="2026-04-21T10:09:54.509332008Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:09:54.509430 containerd[1641]: time="2026-04-21T10:09:54.509402564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:09:54.509451 containerd[1641]: time="2026-04-21T10:09:54.509425909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:54.509538 containerd[1641]: time="2026-04-21T10:09:54.509512679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:09:54.581603 containerd[1641]: time="2026-04-21T10:09:54.581556394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-e-553501566f,Uid:4aff57d252410a2aa90fa2327a6c017e,Namespace:kube-system,Attempt:0,} returns sandbox id \"20b318080fc4f399b8a33ecf8862ebbcf2ef4ecb4491ad6769ca1df940895215\"" Apr 21 10:09:54.603749 containerd[1641]: time="2026-04-21T10:09:54.603681502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-e-553501566f,Uid:b313f488886b897fa1bdcbfdcabbe9fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"097aa3f78eae2dabc5cad8c6d46051aeb86374a1ef40fe53f4e41068cd6220bd\"" Apr 21 10:09:54.618117 containerd[1641]: time="2026-04-21T10:09:54.606105158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-e-553501566f,Uid:a72b91f833fcc107081de9a04b2a5359,Namespace:kube-system,Attempt:0,} returns sandbox id \"96577a093155c649a94d8da751b7256694583affd1da281c2d2bbd3634c1bae8\"" Apr 21 10:09:54.664105 containerd[1641]: time="2026-04-21T10:09:54.663215984Z" level=info msg="CreateContainer within sandbox \"20b318080fc4f399b8a33ecf8862ebbcf2ef4ecb4491ad6769ca1df940895215\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:09:54.706427 containerd[1641]: time="2026-04-21T10:09:54.706335663Z" level=info msg="CreateContainer within sandbox \"96577a093155c649a94d8da751b7256694583affd1da281c2d2bbd3634c1bae8\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:09:54.708091 containerd[1641]: time="2026-04-21T10:09:54.708053470Z" level=info msg="CreateContainer within sandbox \"097aa3f78eae2dabc5cad8c6d46051aeb86374a1ef40fe53f4e41068cd6220bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:09:54.724446 kubelet[2368]: E0421 10:09:54.724377 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://37.27.217.213:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:09:54.757922 kubelet[2368]: E0421 10:09:54.757772 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.217.213:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-e-553501566f?timeout=10s\": dial tcp 37.27.217.213:6443: connect: connection refused" interval="1.6s" Apr 21 10:09:54.770779 containerd[1641]: time="2026-04-21T10:09:54.770678938Z" level=info msg="CreateContainer within sandbox \"097aa3f78eae2dabc5cad8c6d46051aeb86374a1ef40fe53f4e41068cd6220bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ef41b394d4990ec627fc3025e30342d5bed1b64ac96fac273a46f996b44d41cc\"" Apr 21 10:09:54.771504 containerd[1641]: time="2026-04-21T10:09:54.771467231Z" level=info msg="CreateContainer within sandbox \"20b318080fc4f399b8a33ecf8862ebbcf2ef4ecb4491ad6769ca1df940895215\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9c571316c2a0176fa09f79d2a8a8932326abad0c6683f362fdfcf1026d27b06d\"" Apr 21 10:09:54.772599 containerd[1641]: time="2026-04-21T10:09:54.772394882Z" level=info msg="StartContainer for \"9c571316c2a0176fa09f79d2a8a8932326abad0c6683f362fdfcf1026d27b06d\"" Apr 21 10:09:54.773035 containerd[1641]: 
time="2026-04-21T10:09:54.772981202Z" level=info msg="StartContainer for \"ef41b394d4990ec627fc3025e30342d5bed1b64ac96fac273a46f996b44d41cc\"" Apr 21 10:09:54.779541 containerd[1641]: time="2026-04-21T10:09:54.779462924Z" level=info msg="CreateContainer within sandbox \"96577a093155c649a94d8da751b7256694583affd1da281c2d2bbd3634c1bae8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0800efc9a89825e779f811076ebd2758543322c103522141ec3bb2675037ee4\"" Apr 21 10:09:54.780901 containerd[1641]: time="2026-04-21T10:09:54.780776925Z" level=info msg="StartContainer for \"e0800efc9a89825e779f811076ebd2758543322c103522141ec3bb2675037ee4\"" Apr 21 10:09:54.799661 kubelet[2368]: E0421 10:09:54.799618 2368 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://37.27.217.213:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-e-553501566f&limit=500&resourceVersion=0\": dial tcp 37.27.217.213:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:09:54.878405 containerd[1641]: time="2026-04-21T10:09:54.878313790Z" level=info msg="StartContainer for \"9c571316c2a0176fa09f79d2a8a8932326abad0c6683f362fdfcf1026d27b06d\" returns successfully" Apr 21 10:09:54.888194 containerd[1641]: time="2026-04-21T10:09:54.886987691Z" level=info msg="StartContainer for \"ef41b394d4990ec627fc3025e30342d5bed1b64ac96fac273a46f996b44d41cc\" returns successfully" Apr 21 10:09:54.898297 containerd[1641]: time="2026-04-21T10:09:54.898272118Z" level=info msg="StartContainer for \"e0800efc9a89825e779f811076ebd2758543322c103522141ec3bb2675037ee4\" returns successfully" Apr 21 10:09:54.916459 kubelet[2368]: I0421 10:09:54.916200 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:54.918164 kubelet[2368]: E0421 10:09:54.918127 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://37.27.217.213:6443/api/v1/nodes\": dial tcp 37.27.217.213:6443: connect: connection refused" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:55.403034 kubelet[2368]: E0421 10:09:55.402759 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:55.403544 kubelet[2368]: E0421 10:09:55.403529 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:55.404650 kubelet[2368]: E0421 10:09:55.404558 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:56.044086 kubelet[2368]: E0421 10:09:56.043993 2368 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-3-7-e-553501566f" not found Apr 21 10:09:56.362314 kubelet[2368]: E0421 10:09:56.362139 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:56.406183 kubelet[2368]: E0421 10:09:56.406132 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:56.406899 kubelet[2368]: E0421 10:09:56.406860 2368 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-e-553501566f\" not found" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:56.425318 kubelet[2368]: E0421 10:09:56.425271 2368 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting 
for the condition; caused by: nodes "ci-4081-3-7-e-553501566f" not found Apr 21 10:09:56.521516 kubelet[2368]: I0421 10:09:56.521450 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:56.534383 kubelet[2368]: I0421 10:09:56.534333 2368 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:56.534383 kubelet[2368]: E0421 10:09:56.534377 2368 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-e-553501566f\": node \"ci-4081-3-7-e-553501566f\" not found" Apr 21 10:09:56.545949 kubelet[2368]: E0421 10:09:56.545892 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-e-553501566f\" not found" Apr 21 10:09:56.552856 kubelet[2368]: I0421 10:09:56.552765 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:56.560941 kubelet[2368]: E0421 10:09:56.560882 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-e-553501566f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:56.560941 kubelet[2368]: I0421 10:09:56.560916 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:56.562993 kubelet[2368]: E0421 10:09:56.562964 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-e-553501566f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:56.562993 kubelet[2368]: I0421 10:09:56.562993 2368 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" Apr 21 10:09:56.564755 
kubelet[2368]: E0421 10:09:56.564673 2368 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-e-553501566f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" Apr 21 10:09:57.326489 kubelet[2368]: I0421 10:09:57.326199 2368 apiserver.go:52] "Watching apiserver" Apr 21 10:09:57.352797 kubelet[2368]: I0421 10:09:57.352705 2368 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:09:57.645446 systemd[1]: Reloading requested from client PID 2651 ('systemctl') (unit session-7.scope)... Apr 21 10:09:57.645474 systemd[1]: Reloading... Apr 21 10:09:57.759060 zram_generator::config[2691]: No configuration found. Apr 21 10:09:57.870717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:09:57.938066 systemd[1]: Reloading finished in 291 ms. Apr 21 10:09:57.978085 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:09:57.997914 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:09:57.998253 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:58.005300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:09:58.138972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:58.145579 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:09:58.188919 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:09:58.188919 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:09:58.188919 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:09:58.188919 kubelet[2751]: I0421 10:09:58.187500 2751 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:09:58.192604 kubelet[2751]: I0421 10:09:58.192588 2751 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:09:58.192716 kubelet[2751]: I0421 10:09:58.192708 2751 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:09:58.192880 kubelet[2751]: I0421 10:09:58.192868 2751 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:09:58.193692 kubelet[2751]: I0421 10:09:58.193680 2751 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:09:58.195179 kubelet[2751]: I0421 10:09:58.195162 2751 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:09:58.197166 kubelet[2751]: E0421 10:09:58.197140 2751 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:09:58.197166 kubelet[2751]: I0421 10:09:58.197158 2751 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 21 10:09:58.199960 kubelet[2751]: I0421 10:09:58.199944 2751 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:09:58.200353 kubelet[2751]: I0421 10:09:58.200330 2751 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:09:58.200458 kubelet[2751]: I0421 10:09:58.200350 2751 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-e-553501566f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
 Apr 21 10:09:58.200547 kubelet[2751]: I0421 10:09:58.200460 2751 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:09:58.200547 kubelet[2751]: I0421 10:09:58.200467 2751 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:09:58.200547 kubelet[2751]: I0421 10:09:58.200503 2751 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:09:58.200717 kubelet[2751]: I0421 10:09:58.200703 2751 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:09:58.200736 kubelet[2751]: I0421 10:09:58.200718 2751 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:09:58.200765 kubelet[2751]: I0421 10:09:58.200738 2751 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:09:58.200765 kubelet[2751]: I0421 10:09:58.200749 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:09:58.206907 kubelet[2751]: I0421 10:09:58.206893 2751 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:09:58.208694 kubelet[2751]: I0421 10:09:58.207294 2751 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:09:58.209617 kubelet[2751]: I0421 10:09:58.209608 2751 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:09:58.209688 kubelet[2751]: I0421 10:09:58.209683 2751 server.go:1289] "Started kubelet" Apr 21 10:09:58.211404 kubelet[2751]: I0421 10:09:58.211393 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:09:58.212143 kubelet[2751]: I0421 10:09:58.212126 2751 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:09:58.212970 kubelet[2751]: I0421 10:09:58.212955 2751 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:09:58.215583 kubelet[2751]: I0421 10:09:58.215547 2751 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:09:58.215731 kubelet[2751]: I0421 10:09:58.215716 2751 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:09:58.216854 kubelet[2751]: I0421 10:09:58.216821 2751 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:09:58.220644 kubelet[2751]: I0421 10:09:58.220627 2751 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:09:58.220716 kubelet[2751]: I0421 10:09:58.220704 2751 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:09:58.220864 kubelet[2751]: I0421 10:09:58.220817 2751 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:09:58.222361 kubelet[2751]: I0421 10:09:58.222324 2751 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:09:58.224760 kubelet[2751]: I0421 10:09:58.224733 2751 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:09:58.224760 kubelet[2751]: I0421 10:09:58.224745 2751 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:09:58.225010 kubelet[2751]: E0421 10:09:58.224998 2751 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:09:58.233925 kubelet[2751]: I0421 10:09:58.233895 2751 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:09:58.235820 kubelet[2751]: I0421 10:09:58.235806 2751 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:09:58.235820 kubelet[2751]: I0421 10:09:58.235821 2751 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:09:58.235895 kubelet[2751]: I0421 10:09:58.235854 2751 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:09:58.235895 kubelet[2751]: I0421 10:09:58.235861 2751 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:09:58.235930 kubelet[2751]: E0421 10:09:58.235895 2751 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:09:58.278521 kubelet[2751]: I0421 10:09:58.278490 2751 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:09:58.278670 kubelet[2751]: I0421 10:09:58.278658 2751 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:09:58.278713 kubelet[2751]: I0421 10:09:58.278707 2751 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:09:58.278871 kubelet[2751]: I0421 10:09:58.278862 2751 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:09:58.278919 kubelet[2751]: I0421 10:09:58.278906 2751 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:09:58.278949 kubelet[2751]: I0421 10:09:58.278944 2751 policy_none.go:49] "None policy: Start" Apr 21 10:09:58.278988 kubelet[2751]: I0421 10:09:58.278982 2751 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:09:58.279035 kubelet[2751]: I0421 10:09:58.279021 2751 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:09:58.279126 kubelet[2751]: I0421 10:09:58.279119 2751 state_mem.go:75] "Updated machine memory state" Apr 21 10:09:58.280382 kubelet[2751]: E0421 10:09:58.280370 2751 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:09:58.280557 kubelet[2751]: I0421 
10:09:58.280548 2751 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:09:58.280606 kubelet[2751]: I0421 10:09:58.280590 2751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:09:58.281909 kubelet[2751]: I0421 10:09:58.281899 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:09:58.283096 kubelet[2751]: E0421 10:09:58.283072 2751 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:09:58.336776 kubelet[2751]: I0421 10:09:58.336580 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.336776 kubelet[2751]: I0421 10:09:58.336767 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.336929 kubelet[2751]: I0421 10:09:58.336584 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.386255 kubelet[2751]: I0421 10:09:58.386212 2751 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:58.394607 kubelet[2751]: I0421 10:09:58.394531 2751 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:58.394860 kubelet[2751]: I0421 10:09:58.394653 2751 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421198 kubelet[2751]: I0421 10:09:58.421138 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a72b91f833fcc107081de9a04b2a5359-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-e-553501566f\" (UID: \"a72b91f833fcc107081de9a04b2a5359\") " 
pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421198 kubelet[2751]: I0421 10:09:58.421192 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4aff57d252410a2aa90fa2327a6c017e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-e-553501566f\" (UID: \"4aff57d252410a2aa90fa2327a6c017e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421387 kubelet[2751]: I0421 10:09:58.421243 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421387 kubelet[2751]: I0421 10:09:58.421281 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421387 kubelet[2751]: I0421 10:09:58.421306 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4aff57d252410a2aa90fa2327a6c017e-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-e-553501566f\" (UID: \"4aff57d252410a2aa90fa2327a6c017e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421387 kubelet[2751]: I0421 10:09:58.421328 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4aff57d252410a2aa90fa2327a6c017e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-e-553501566f\" (UID: \"4aff57d252410a2aa90fa2327a6c017e\") " pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421571 kubelet[2751]: I0421 10:09:58.421515 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421616 kubelet[2751]: I0421 10:09:58.421593 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:58.421886 kubelet[2751]: I0421 10:09:58.421824 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b313f488886b897fa1bdcbfdcabbe9fd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-e-553501566f\" (UID: \"b313f488886b897fa1bdcbfdcabbe9fd\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" Apr 21 10:09:59.201683 kubelet[2751]: I0421 10:09:59.201494 2751 apiserver.go:52] "Watching apiserver" Apr 21 10:09:59.221235 kubelet[2751]: I0421 10:09:59.221169 2751 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:09:59.253159 kubelet[2751]: I0421 10:09:59.251522 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:59.253472 kubelet[2751]: I0421 
10:09:59.253383 2751 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" Apr 21 10:09:59.259062 kubelet[2751]: E0421 10:09:59.259035 2751 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-e-553501566f\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" Apr 21 10:09:59.260298 kubelet[2751]: E0421 10:09:59.260280 2751 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-e-553501566f\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" Apr 21 10:09:59.271561 kubelet[2751]: I0421 10:09:59.271446 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-e-553501566f" podStartSLOduration=1.271434481 podStartE2EDuration="1.271434481s" podCreationTimestamp="2026-04-21 10:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:09:59.271019488 +0000 UTC m=+1.119117346" watchObservedRunningTime="2026-04-21 10:09:59.271434481 +0000 UTC m=+1.119532339" Apr 21 10:09:59.279504 kubelet[2751]: I0421 10:09:59.279217 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-e-553501566f" podStartSLOduration=1.27920759 podStartE2EDuration="1.27920759s" podCreationTimestamp="2026-04-21 10:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:09:59.27792704 +0000 UTC m=+1.126024898" watchObservedRunningTime="2026-04-21 10:09:59.27920759 +0000 UTC m=+1.127305448" Apr 21 10:09:59.286547 kubelet[2751]: I0421 10:09:59.286371 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-e-553501566f" podStartSLOduration=1.286362823 
podStartE2EDuration="1.286362823s" podCreationTimestamp="2026-04-21 10:09:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:09:59.286152057 +0000 UTC m=+1.134249915" watchObservedRunningTime="2026-04-21 10:09:59.286362823 +0000 UTC m=+1.134460671" Apr 21 10:10:02.467985 kubelet[2751]: I0421 10:10:02.467952 2751 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:10:02.469645 containerd[1641]: time="2026-04-21T10:10:02.469043303Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:10:02.470316 kubelet[2751]: I0421 10:10:02.469333 2751 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:10:03.153443 kubelet[2751]: I0421 10:10:03.153387 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae1cece1-5a45-4179-8b4f-d7295be66629-kube-proxy\") pod \"kube-proxy-pzhtc\" (UID: \"ae1cece1-5a45-4179-8b4f-d7295be66629\") " pod="kube-system/kube-proxy-pzhtc" Apr 21 10:10:03.153443 kubelet[2751]: I0421 10:10:03.153427 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae1cece1-5a45-4179-8b4f-d7295be66629-xtables-lock\") pod \"kube-proxy-pzhtc\" (UID: \"ae1cece1-5a45-4179-8b4f-d7295be66629\") " pod="kube-system/kube-proxy-pzhtc" Apr 21 10:10:03.153443 kubelet[2751]: I0421 10:10:03.153448 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xfh\" (UniqueName: \"kubernetes.io/projected/ae1cece1-5a45-4179-8b4f-d7295be66629-kube-api-access-f6xfh\") pod \"kube-proxy-pzhtc\" (UID: \"ae1cece1-5a45-4179-8b4f-d7295be66629\") " 
pod="kube-system/kube-proxy-pzhtc" Apr 21 10:10:03.153443 kubelet[2751]: I0421 10:10:03.153471 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae1cece1-5a45-4179-8b4f-d7295be66629-lib-modules\") pod \"kube-proxy-pzhtc\" (UID: \"ae1cece1-5a45-4179-8b4f-d7295be66629\") " pod="kube-system/kube-proxy-pzhtc" Apr 21 10:10:03.261589 kubelet[2751]: E0421 10:10:03.261547 2751 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 21 10:10:03.261589 kubelet[2751]: E0421 10:10:03.261577 2751 projected.go:194] Error preparing data for projected volume kube-api-access-f6xfh for pod kube-system/kube-proxy-pzhtc: configmap "kube-root-ca.crt" not found Apr 21 10:10:03.262055 kubelet[2751]: E0421 10:10:03.261645 2751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ae1cece1-5a45-4179-8b4f-d7295be66629-kube-api-access-f6xfh podName:ae1cece1-5a45-4179-8b4f-d7295be66629 nodeName:}" failed. No retries permitted until 2026-04-21 10:10:03.761624475 +0000 UTC m=+5.609722333 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f6xfh" (UniqueName: "kubernetes.io/projected/ae1cece1-5a45-4179-8b4f-d7295be66629-kube-api-access-f6xfh") pod "kube-proxy-pzhtc" (UID: "ae1cece1-5a45-4179-8b4f-d7295be66629") : configmap "kube-root-ca.crt" not found Apr 21 10:10:03.758697 kubelet[2751]: I0421 10:10:03.758537 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c62d8d36-bce1-47f8-990e-c7aa31d36aa6-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-8654l\" (UID: \"c62d8d36-bce1-47f8-990e-c7aa31d36aa6\") " pod="tigera-operator/tigera-operator-6bf85f8dd-8654l" Apr 21 10:10:03.758697 kubelet[2751]: I0421 10:10:03.758593 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75gjh\" (UniqueName: \"kubernetes.io/projected/c62d8d36-bce1-47f8-990e-c7aa31d36aa6-kube-api-access-75gjh\") pod \"tigera-operator-6bf85f8dd-8654l\" (UID: \"c62d8d36-bce1-47f8-990e-c7aa31d36aa6\") " pod="tigera-operator/tigera-operator-6bf85f8dd-8654l" Apr 21 10:10:04.004463 containerd[1641]: time="2026-04-21T10:10:04.004422476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-8654l,Uid:c62d8d36-bce1-47f8-990e-c7aa31d36aa6,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:10:04.009367 containerd[1641]: time="2026-04-21T10:10:04.009292563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzhtc,Uid:ae1cece1-5a45-4179-8b4f-d7295be66629,Namespace:kube-system,Attempt:0,}" Apr 21 10:10:04.029113 containerd[1641]: time="2026-04-21T10:10:04.029006894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:04.029113 containerd[1641]: time="2026-04-21T10:10:04.029056900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:04.029113 containerd[1641]: time="2026-04-21T10:10:04.029067877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:04.029248 containerd[1641]: time="2026-04-21T10:10:04.029145794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:04.052385 containerd[1641]: time="2026-04-21T10:10:04.052145946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:04.052385 containerd[1641]: time="2026-04-21T10:10:04.052185777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:04.052385 containerd[1641]: time="2026-04-21T10:10:04.052196193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:04.052385 containerd[1641]: time="2026-04-21T10:10:04.052264266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:04.095980 containerd[1641]: time="2026-04-21T10:10:04.095943018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzhtc,Uid:ae1cece1-5a45-4179-8b4f-d7295be66629,Namespace:kube-system,Attempt:0,} returns sandbox id \"3eee1e675a82ed0b3ea3dc12bd1fd62a23e08b5380bf4687ff9cf602455fb936\"" Apr 21 10:10:04.100294 containerd[1641]: time="2026-04-21T10:10:04.100097312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-8654l,Uid:c62d8d36-bce1-47f8-990e-c7aa31d36aa6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"585657cebccea574a7b0ab5ecfd18a22cc9dc8518c42f03983d6e5a55505b330\"" Apr 21 10:10:04.101998 containerd[1641]: time="2026-04-21T10:10:04.101770102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:10:04.103554 containerd[1641]: time="2026-04-21T10:10:04.103470475Z" level=info msg="CreateContainer within sandbox \"3eee1e675a82ed0b3ea3dc12bd1fd62a23e08b5380bf4687ff9cf602455fb936\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:10:04.126339 containerd[1641]: time="2026-04-21T10:10:04.125910279Z" level=info msg="CreateContainer within sandbox \"3eee1e675a82ed0b3ea3dc12bd1fd62a23e08b5380bf4687ff9cf602455fb936\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0f35f5620d5d0cb25f22832ad2e5b6b9c6e899f2af9c787fa8b7a4cca3d7364\"" Apr 21 10:10:04.126630 containerd[1641]: time="2026-04-21T10:10:04.126613653Z" level=info msg="StartContainer for \"d0f35f5620d5d0cb25f22832ad2e5b6b9c6e899f2af9c787fa8b7a4cca3d7364\"" Apr 21 10:10:04.180568 containerd[1641]: time="2026-04-21T10:10:04.180511855Z" level=info msg="StartContainer for \"d0f35f5620d5d0cb25f22832ad2e5b6b9c6e899f2af9c787fa8b7a4cca3d7364\" returns successfully" Apr 21 10:10:04.272310 kubelet[2751]: I0421 10:10:04.272193 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-proxy-pzhtc" podStartSLOduration=1.272178948 podStartE2EDuration="1.272178948s" podCreationTimestamp="2026-04-21 10:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:04.270508642 +0000 UTC m=+6.118606500" watchObservedRunningTime="2026-04-21 10:10:04.272178948 +0000 UTC m=+6.120276797" Apr 21 10:10:05.982765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4088803662.mount: Deactivated successfully. Apr 21 10:10:06.709487 containerd[1641]: time="2026-04-21T10:10:06.709441484Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:06.710678 containerd[1641]: time="2026-04-21T10:10:06.710638444Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:10:06.714226 containerd[1641]: time="2026-04-21T10:10:06.714190661Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:06.716294 containerd[1641]: time="2026-04-21T10:10:06.716278106Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:06.717326 containerd[1641]: time="2026-04-21T10:10:06.716793195Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.614999567s" Apr 21 10:10:06.717326 containerd[1641]: time="2026-04-21T10:10:06.716827486Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:10:06.720226 containerd[1641]: time="2026-04-21T10:10:06.720199873Z" level=info msg="CreateContainer within sandbox \"585657cebccea574a7b0ab5ecfd18a22cc9dc8518c42f03983d6e5a55505b330\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:10:06.730508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222346567.mount: Deactivated successfully. Apr 21 10:10:06.738172 containerd[1641]: time="2026-04-21T10:10:06.738108145Z" level=info msg="CreateContainer within sandbox \"585657cebccea574a7b0ab5ecfd18a22cc9dc8518c42f03983d6e5a55505b330\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"846256436adbe5cf89a891d024c351d871f2f01449ad45a4ec237ab39a6db214\"" Apr 21 10:10:06.738588 containerd[1641]: time="2026-04-21T10:10:06.738558425Z" level=info msg="StartContainer for \"846256436adbe5cf89a891d024c351d871f2f01449ad45a4ec237ab39a6db214\"" Apr 21 10:10:06.786375 containerd[1641]: time="2026-04-21T10:10:06.786312048Z" level=info msg="StartContainer for \"846256436adbe5cf89a891d024c351d871f2f01449ad45a4ec237ab39a6db214\" returns successfully" Apr 21 10:10:11.856789 sudo[1861]: pam_unix(sudo:session): session closed for user root Apr 21 10:10:11.888413 sshd[1857]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:11.897419 systemd-logind[1622]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:10:11.899125 systemd[1]: sshd@6-37.27.217.213:22-50.85.169.122:44880.service: Deactivated successfully. Apr 21 10:10:11.907476 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:10:11.910458 systemd-logind[1622]: Removed session 7. 
Apr 21 10:10:12.166318 kubelet[2751]: I0421 10:10:12.166216 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-8654l" podStartSLOduration=6.54996036 podStartE2EDuration="9.166203076s" podCreationTimestamp="2026-04-21 10:10:03 +0000 UTC" firstStartedPulling="2026-04-21 10:10:04.101129263 +0000 UTC m=+5.949227121" lastFinishedPulling="2026-04-21 10:10:06.717371989 +0000 UTC m=+8.565469837" observedRunningTime="2026-04-21 10:10:07.282376321 +0000 UTC m=+9.130474209" watchObservedRunningTime="2026-04-21 10:10:12.166203076 +0000 UTC m=+14.014300924" Apr 21 10:10:13.723495 kubelet[2751]: I0421 10:10:13.723215 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3bb716-a1b2-4c42-9be0-0c6d090f345c-tigera-ca-bundle\") pod \"calico-typha-864985f99d-mbrmm\" (UID: \"ed3bb716-a1b2-4c42-9be0-0c6d090f345c\") " pod="calico-system/calico-typha-864985f99d-mbrmm" Apr 21 10:10:13.725563 kubelet[2751]: I0421 10:10:13.725182 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ed3bb716-a1b2-4c42-9be0-0c6d090f345c-typha-certs\") pod \"calico-typha-864985f99d-mbrmm\" (UID: \"ed3bb716-a1b2-4c42-9be0-0c6d090f345c\") " pod="calico-system/calico-typha-864985f99d-mbrmm" Apr 21 10:10:13.725563 kubelet[2751]: I0421 10:10:13.725203 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9th\" (UniqueName: \"kubernetes.io/projected/ed3bb716-a1b2-4c42-9be0-0c6d090f345c-kube-api-access-ft9th\") pod \"calico-typha-864985f99d-mbrmm\" (UID: \"ed3bb716-a1b2-4c42-9be0-0c6d090f345c\") " pod="calico-system/calico-typha-864985f99d-mbrmm" Apr 21 10:10:13.826924 kubelet[2751]: I0421 10:10:13.825898 2751 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwrwb\" (UniqueName: \"kubernetes.io/projected/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-kube-api-access-xwrwb\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.826924 kubelet[2751]: I0421 10:10:13.825935 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-cni-net-dir\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.826924 kubelet[2751]: I0421 10:10:13.825948 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-lib-modules\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.826924 kubelet[2751]: I0421 10:10:13.825962 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-var-lib-calico\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.826924 kubelet[2751]: I0421 10:10:13.825972 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-xtables-lock\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827093 kubelet[2751]: I0421 10:10:13.825983 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-cni-log-dir\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827093 kubelet[2751]: I0421 10:10:13.825995 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-tigera-ca-bundle\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827093 kubelet[2751]: I0421 10:10:13.826007 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-var-run-calico\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827093 kubelet[2751]: I0421 10:10:13.826016 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-nodeproc\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827093 kubelet[2751]: I0421 10:10:13.826027 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-sys-fs\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827167 kubelet[2751]: I0421 10:10:13.826037 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: 
\"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-bpffs\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827167 kubelet[2751]: I0421 10:10:13.826048 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-cni-bin-dir\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827167 kubelet[2751]: I0421 10:10:13.826057 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-policysync\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827167 kubelet[2751]: I0421 10:10:13.826067 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-node-certs\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.827167 kubelet[2751]: I0421 10:10:13.826087 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0b1bd8f8-7990-4fcf-8021-13b8e63369a3-flexvol-driver-host\") pod \"calico-node-sw6qg\" (UID: \"0b1bd8f8-7990-4fcf-8021-13b8e63369a3\") " pod="calico-system/calico-node-sw6qg" Apr 21 10:10:13.912455 kubelet[2751]: E0421 10:10:13.912351 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:13.927108 kubelet[2751]: I0421 10:10:13.927080 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88p9s\" (UniqueName: \"kubernetes.io/projected/83f3b362-fdfe-49c7-9fa1-a2d602501612-kube-api-access-88p9s\") pod \"csi-node-driver-7trjk\" (UID: \"83f3b362-fdfe-49c7-9fa1-a2d602501612\") " pod="calico-system/csi-node-driver-7trjk" Apr 21 10:10:13.929157 kubelet[2751]: I0421 10:10:13.928049 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83f3b362-fdfe-49c7-9fa1-a2d602501612-kubelet-dir\") pod \"csi-node-driver-7trjk\" (UID: \"83f3b362-fdfe-49c7-9fa1-a2d602501612\") " pod="calico-system/csi-node-driver-7trjk" Apr 21 10:10:13.929157 kubelet[2751]: I0421 10:10:13.928097 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/83f3b362-fdfe-49c7-9fa1-a2d602501612-socket-dir\") pod \"csi-node-driver-7trjk\" (UID: \"83f3b362-fdfe-49c7-9fa1-a2d602501612\") " pod="calico-system/csi-node-driver-7trjk" Apr 21 10:10:13.929157 kubelet[2751]: I0421 10:10:13.928131 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/83f3b362-fdfe-49c7-9fa1-a2d602501612-varrun\") pod \"csi-node-driver-7trjk\" (UID: \"83f3b362-fdfe-49c7-9fa1-a2d602501612\") " pod="calico-system/csi-node-driver-7trjk" Apr 21 10:10:13.929157 kubelet[2751]: I0421 10:10:13.928171 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/83f3b362-fdfe-49c7-9fa1-a2d602501612-registration-dir\") pod \"csi-node-driver-7trjk\" (UID: 
\"83f3b362-fdfe-49c7-9fa1-a2d602501612\") " pod="calico-system/csi-node-driver-7trjk" Apr 21 10:10:13.936523 kubelet[2751]: E0421 10:10:13.936507 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.936616 kubelet[2751]: W0421 10:10:13.936607 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.936657 kubelet[2751]: E0421 10:10:13.936648 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.937671 kubelet[2751]: E0421 10:10:13.936887 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.937754 kubelet[2751]: W0421 10:10:13.937743 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.937807 kubelet[2751]: E0421 10:10:13.937798 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.939025 kubelet[2751]: E0421 10:10:13.939015 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.939737 kubelet[2751]: W0421 10:10:13.939725 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.939803 kubelet[2751]: E0421 10:10:13.939795 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.941199 kubelet[2751]: E0421 10:10:13.941189 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.941335 kubelet[2751]: W0421 10:10:13.941245 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.941335 kubelet[2751]: E0421 10:10:13.941255 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.941936 kubelet[2751]: E0421 10:10:13.941911 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.941936 kubelet[2751]: W0421 10:10:13.941919 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.941936 kubelet[2751]: E0421 10:10:13.941927 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.942273 kubelet[2751]: E0421 10:10:13.942208 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.942273 kubelet[2751]: W0421 10:10:13.942216 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.942273 kubelet[2751]: E0421 10:10:13.942223 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.943851 kubelet[2751]: E0421 10:10:13.943741 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.943851 kubelet[2751]: W0421 10:10:13.943750 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.943851 kubelet[2751]: E0421 10:10:13.943768 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.944182 kubelet[2751]: E0421 10:10:13.944060 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.944182 kubelet[2751]: W0421 10:10:13.944069 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.944182 kubelet[2751]: E0421 10:10:13.944076 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.944517 kubelet[2751]: E0421 10:10:13.944442 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.944517 kubelet[2751]: W0421 10:10:13.944450 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.944517 kubelet[2751]: E0421 10:10:13.944457 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.945144 kubelet[2751]: E0421 10:10:13.945068 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.945144 kubelet[2751]: W0421 10:10:13.945076 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.945144 kubelet[2751]: E0421 10:10:13.945083 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.945330 kubelet[2751]: E0421 10:10:13.945322 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.945442 kubelet[2751]: W0421 10:10:13.945355 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.945442 kubelet[2751]: E0421 10:10:13.945363 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.946165 kubelet[2751]: E0421 10:10:13.946155 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.946289 kubelet[2751]: W0421 10:10:13.946209 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.946289 kubelet[2751]: E0421 10:10:13.946219 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.946739 kubelet[2751]: E0421 10:10:13.946711 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.947074 kubelet[2751]: W0421 10:10:13.947025 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.947074 kubelet[2751]: E0421 10:10:13.947037 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.948379 kubelet[2751]: E0421 10:10:13.948299 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.948379 kubelet[2751]: W0421 10:10:13.948308 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.948379 kubelet[2751]: E0421 10:10:13.948315 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.949062 kubelet[2751]: E0421 10:10:13.949037 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.949062 kubelet[2751]: W0421 10:10:13.949045 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.949062 kubelet[2751]: E0421 10:10:13.949053 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.949381 kubelet[2751]: E0421 10:10:13.949326 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.949381 kubelet[2751]: W0421 10:10:13.949334 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.949381 kubelet[2751]: E0421 10:10:13.949340 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.949634 kubelet[2751]: E0421 10:10:13.949602 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.949634 kubelet[2751]: W0421 10:10:13.949610 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.949634 kubelet[2751]: E0421 10:10:13.949617 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.950934 kubelet[2751]: E0421 10:10:13.950924 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.950992 kubelet[2751]: W0421 10:10:13.950984 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.951040 kubelet[2751]: E0421 10:10:13.951021 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.951297 kubelet[2751]: E0421 10:10:13.951273 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.951297 kubelet[2751]: W0421 10:10:13.951281 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.951297 kubelet[2751]: E0421 10:10:13.951288 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.952877 kubelet[2751]: E0421 10:10:13.952023 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.952877 kubelet[2751]: W0421 10:10:13.952032 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.952877 kubelet[2751]: E0421 10:10:13.952040 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.953190 kubelet[2751]: E0421 10:10:13.953181 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.953242 kubelet[2751]: W0421 10:10:13.953225 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.953242 kubelet[2751]: E0421 10:10:13.953233 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.954105 kubelet[2751]: E0421 10:10:13.953986 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.954105 kubelet[2751]: W0421 10:10:13.953995 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.954105 kubelet[2751]: E0421 10:10:13.954002 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.954391 kubelet[2751]: E0421 10:10:13.954366 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.954391 kubelet[2751]: W0421 10:10:13.954374 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.954391 kubelet[2751]: E0421 10:10:13.954382 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.954680 kubelet[2751]: E0421 10:10:13.954650 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.954680 kubelet[2751]: W0421 10:10:13.954657 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.954680 kubelet[2751]: E0421 10:10:13.954663 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.955017 kubelet[2751]: E0421 10:10:13.955009 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.955059 kubelet[2751]: W0421 10:10:13.955052 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.955087 kubelet[2751]: E0421 10:10:13.955080 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.955769 kubelet[2751]: E0421 10:10:13.955669 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.955769 kubelet[2751]: W0421 10:10:13.955678 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.955769 kubelet[2751]: E0421 10:10:13.955684 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.956423 kubelet[2751]: E0421 10:10:13.956414 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.956553 kubelet[2751]: W0421 10:10:13.956460 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.956553 kubelet[2751]: E0421 10:10:13.956468 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.956788 kubelet[2751]: E0421 10:10:13.956780 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.956910 kubelet[2751]: W0421 10:10:13.956815 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.956910 kubelet[2751]: E0421 10:10:13.956824 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.957143 kubelet[2751]: E0421 10:10:13.957134 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.957224 kubelet[2751]: W0421 10:10:13.957204 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.957224 kubelet[2751]: E0421 10:10:13.957215 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.957614 kubelet[2751]: E0421 10:10:13.957583 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.957614 kubelet[2751]: W0421 10:10:13.957591 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.957614 kubelet[2751]: E0421 10:10:13.957597 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.957936 kubelet[2751]: E0421 10:10:13.957927 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.957973 kubelet[2751]: W0421 10:10:13.957966 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.958041 kubelet[2751]: E0421 10:10:13.957995 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.958327 kubelet[2751]: E0421 10:10:13.958278 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.958327 kubelet[2751]: W0421 10:10:13.958287 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.958327 kubelet[2751]: E0421 10:10:13.958293 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:13.958642 kubelet[2751]: E0421 10:10:13.958614 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:13.958642 kubelet[2751]: W0421 10:10:13.958622 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:13.958642 kubelet[2751]: E0421 10:10:13.958628 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:13.983376 containerd[1641]: time="2026-04-21T10:10:13.983284424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864985f99d-mbrmm,Uid:ed3bb716-a1b2-4c42-9be0-0c6d090f345c,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:14.009216 containerd[1641]: time="2026-04-21T10:10:14.009085425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:14.009216 containerd[1641]: time="2026-04-21T10:10:14.009185826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:14.009727 containerd[1641]: time="2026-04-21T10:10:14.009684407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:14.009873 containerd[1641]: time="2026-04-21T10:10:14.009790056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:14.029444 kubelet[2751]: E0421 10:10:14.029304 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:14.029444 kubelet[2751]: W0421 10:10:14.029320 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:14.029444 kubelet[2751]: E0421 10:10:14.029335 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:14.029787 kubelet[2751]: E0421 10:10:14.029675 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:14.029787 kubelet[2751]: W0421 10:10:14.029702 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:14.029787 kubelet[2751]: E0421 10:10:14.029711 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:14.057885 containerd[1641]: time="2026-04-21T10:10:14.057810827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864985f99d-mbrmm,Uid:ed3bb716-a1b2-4c42-9be0-0c6d090f345c,Namespace:calico-system,Attempt:0,} returns sandbox id \"361900ad4323a02f5c229dc908e4f1121e8677220d553a5cefc907bc8783f0d1\"" Apr 21 10:10:14.059720 containerd[1641]: time="2026-04-21T10:10:14.059123575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:10:14.113188 containerd[1641]: time="2026-04-21T10:10:14.113122253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw6qg,Uid:0b1bd8f8-7990-4fcf-8021-13b8e63369a3,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:14.147038 containerd[1641]: time="2026-04-21T10:10:14.146938572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:14.147038 containerd[1641]: time="2026-04-21T10:10:14.146988918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:14.147038 containerd[1641]: time="2026-04-21T10:10:14.147000436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:14.147860 containerd[1641]: time="2026-04-21T10:10:14.147412185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:14.181438 containerd[1641]: time="2026-04-21T10:10:14.181409888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sw6qg,Uid:0b1bd8f8-7990-4fcf-8021-13b8e63369a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\"" Apr 21 10:10:16.198260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059183877.mount: Deactivated successfully. Apr 21 10:10:16.237143 kubelet[2751]: E0421 10:10:16.237091 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:16.787965 update_engine[1623]: I20260421 10:10:16.787890 1623 update_attempter.cc:509] Updating boot flags... 
Apr 21 10:10:16.834925 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3309) Apr 21 10:10:16.886907 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (3313) Apr 21 10:10:17.590229 containerd[1641]: time="2026-04-21T10:10:17.590181846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:17.591191 containerd[1641]: time="2026-04-21T10:10:17.591048200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 21 10:10:17.592904 containerd[1641]: time="2026-04-21T10:10:17.591932201Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:17.594150 containerd[1641]: time="2026-04-21T10:10:17.593759292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:17.594389 containerd[1641]: time="2026-04-21T10:10:17.594283510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.535137801s" Apr 21 10:10:17.594389 containerd[1641]: time="2026-04-21T10:10:17.594305023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:10:17.597295 containerd[1641]: time="2026-04-21T10:10:17.596533809Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:10:17.607239 containerd[1641]: time="2026-04-21T10:10:17.607205654Z" level=info msg="CreateContainer within sandbox \"361900ad4323a02f5c229dc908e4f1121e8677220d553a5cefc907bc8783f0d1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:10:17.622593 containerd[1641]: time="2026-04-21T10:10:17.622548193Z" level=info msg="CreateContainer within sandbox \"361900ad4323a02f5c229dc908e4f1121e8677220d553a5cefc907bc8783f0d1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f4e63aec23a1da0abf1d1af6b0700ec6d4a68082ebc89a2370bfcca8f5803053\"" Apr 21 10:10:17.623131 containerd[1641]: time="2026-04-21T10:10:17.623109999Z" level=info msg="StartContainer for \"f4e63aec23a1da0abf1d1af6b0700ec6d4a68082ebc89a2370bfcca8f5803053\"" Apr 21 10:10:17.683696 containerd[1641]: time="2026-04-21T10:10:17.683411021Z" level=info msg="StartContainer for \"f4e63aec23a1da0abf1d1af6b0700ec6d4a68082ebc89a2370bfcca8f5803053\" returns successfully" Apr 21 10:10:18.242715 kubelet[2751]: E0421 10:10:18.242638 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:18.345009 kubelet[2751]: E0421 10:10:18.344763 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.345009 kubelet[2751]: W0421 10:10:18.344792 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.345009 kubelet[2751]: E0421 10:10:18.344819 2751 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.345473 kubelet[2751]: E0421 10:10:18.345354 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.345473 kubelet[2751]: W0421 10:10:18.345371 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.345473 kubelet[2751]: E0421 10:10:18.345386 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.347234 kubelet[2751]: E0421 10:10:18.346192 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.347234 kubelet[2751]: W0421 10:10:18.346211 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.347234 kubelet[2751]: E0421 10:10:18.346229 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:18.372892 kubelet[2751]: E0421 10:10:18.372871 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.372892 kubelet[2751]: W0421 10:10:18.372882 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.372892 kubelet[2751]: E0421 10:10:18.372889 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.373297 kubelet[2751]: E0421 10:10:18.373283 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.373297 kubelet[2751]: W0421 10:10:18.373291 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.373297 kubelet[2751]: E0421 10:10:18.373299 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:18.373614 kubelet[2751]: E0421 10:10:18.373593 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.373614 kubelet[2751]: W0421 10:10:18.373603 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.373614 kubelet[2751]: E0421 10:10:18.373610 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.373936 kubelet[2751]: E0421 10:10:18.373910 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.373936 kubelet[2751]: W0421 10:10:18.373920 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.373936 kubelet[2751]: E0421 10:10:18.373926 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:18.374283 kubelet[2751]: E0421 10:10:18.374260 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.374283 kubelet[2751]: W0421 10:10:18.374281 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.374349 kubelet[2751]: E0421 10:10:18.374295 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.374628 kubelet[2751]: E0421 10:10:18.374609 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.374652 kubelet[2751]: W0421 10:10:18.374626 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.374652 kubelet[2751]: E0421 10:10:18.374639 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:18.375000 kubelet[2751]: E0421 10:10:18.374982 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.375024 kubelet[2751]: W0421 10:10:18.374999 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.375024 kubelet[2751]: E0421 10:10:18.375010 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.375486 kubelet[2751]: E0421 10:10:18.375466 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.375511 kubelet[2751]: W0421 10:10:18.375501 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.375532 kubelet[2751]: E0421 10:10:18.375514 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:18.375888 kubelet[2751]: E0421 10:10:18.375868 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.375914 kubelet[2751]: W0421 10:10:18.375887 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.375914 kubelet[2751]: E0421 10:10:18.375899 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.376223 kubelet[2751]: E0421 10:10:18.376206 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.376247 kubelet[2751]: W0421 10:10:18.376223 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.376247 kubelet[2751]: E0421 10:10:18.376235 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:18.376535 kubelet[2751]: E0421 10:10:18.376490 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.376535 kubelet[2751]: W0421 10:10:18.376499 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.376535 kubelet[2751]: E0421 10:10:18.376506 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:18.376942 kubelet[2751]: E0421 10:10:18.376902 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:18.376942 kubelet[2751]: W0421 10:10:18.376912 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:18.376942 kubelet[2751]: E0421 10:10:18.376918 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 10:10:19.301784 kubelet[2751]: I0421 10:10:19.301707 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:10:19.363882 kubelet[2751]: E0421 10:10:19.361511 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:10:19.363882 kubelet[2751]: W0421 10:10:19.361539 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:10:19.363882 kubelet[2751]: E0421 10:10:19.361568 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:10:19.363882 kubelet[2751]: E0421 10:10:19.362128 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:10:19.363882 kubelet[2751]: W0421 10:10:19.362148 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:10:19.363882 kubelet[2751]: E0421 10:10:19.362204 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 21 10:10:19.388322 kubelet[2751]: E0421 10:10:19.388284 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:10:19.388322 kubelet[2751]: W0421 10:10:19.388312 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:10:19.388433 kubelet[2751]: E0421 10:10:19.388328 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 21 10:10:19.389112 kubelet[2751]: E0421 10:10:19.389078 2751 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:19.389112 kubelet[2751]: W0421 10:10:19.389101 2751 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:19.389225 kubelet[2751]: E0421 10:10:19.389116 2751 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:19.649743 containerd[1641]: time="2026-04-21T10:10:19.649624440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:19.651923 containerd[1641]: time="2026-04-21T10:10:19.651716909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:10:19.653068 containerd[1641]: time="2026-04-21T10:10:19.653014743Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:19.655862 containerd[1641]: time="2026-04-21T10:10:19.655795948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:19.656466 containerd[1641]: time="2026-04-21T10:10:19.656424123Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 2.05986717s" Apr 21 10:10:19.656508 containerd[1641]: time="2026-04-21T10:10:19.656465976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:10:19.661198 containerd[1641]: time="2026-04-21T10:10:19.661097576Z" level=info msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:10:19.678573 containerd[1641]: time="2026-04-21T10:10:19.678530790Z" level=info msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"97a1f2099da86b26bc0044294d4a6fdd8a27f14381ad5a3d1a8a31751f1be2d5\"" Apr 21 10:10:19.679672 containerd[1641]: time="2026-04-21T10:10:19.679648101Z" level=info msg="StartContainer for \"97a1f2099da86b26bc0044294d4a6fdd8a27f14381ad5a3d1a8a31751f1be2d5\"" Apr 21 10:10:19.740113 containerd[1641]: time="2026-04-21T10:10:19.740056174Z" level=info msg="StartContainer for \"97a1f2099da86b26bc0044294d4a6fdd8a27f14381ad5a3d1a8a31751f1be2d5\" returns successfully" Apr 21 10:10:19.775918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97a1f2099da86b26bc0044294d4a6fdd8a27f14381ad5a3d1a8a31751f1be2d5-rootfs.mount: Deactivated successfully. 
Apr 21 10:10:20.018374 containerd[1641]: time="2026-04-21T10:10:20.018226324Z" level=info msg="shim disconnected" id=97a1f2099da86b26bc0044294d4a6fdd8a27f14381ad5a3d1a8a31751f1be2d5 namespace=k8s.io Apr 21 10:10:20.018374 containerd[1641]: time="2026-04-21T10:10:20.018294748Z" level=warning msg="cleaning up after shim disconnected" id=97a1f2099da86b26bc0044294d4a6fdd8a27f14381ad5a3d1a8a31751f1be2d5 namespace=k8s.io Apr 21 10:10:20.018374 containerd[1641]: time="2026-04-21T10:10:20.018311033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:10:20.237203 kubelet[2751]: E0421 10:10:20.236305 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:20.311040 containerd[1641]: time="2026-04-21T10:10:20.308353797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:10:20.335077 kubelet[2751]: I0421 10:10:20.332728 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-864985f99d-mbrmm" podStartSLOduration=3.794833717 podStartE2EDuration="7.331364638s" podCreationTimestamp="2026-04-21 10:10:13 +0000 UTC" firstStartedPulling="2026-04-21 10:10:14.058754939 +0000 UTC m=+15.906852787" lastFinishedPulling="2026-04-21 10:10:17.59528586 +0000 UTC m=+19.443383708" observedRunningTime="2026-04-21 10:10:18.316781733 +0000 UTC m=+20.164879631" watchObservedRunningTime="2026-04-21 10:10:20.331364638 +0000 UTC m=+22.179462516" Apr 21 10:10:22.237097 kubelet[2751]: E0421 10:10:22.236610 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:24.237651 kubelet[2751]: E0421 10:10:24.237164 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:26.237113 kubelet[2751]: E0421 10:10:26.236231 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:28.236871 kubelet[2751]: E0421 10:10:28.236819 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:29.706434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515914320.mount: Deactivated successfully. 
Apr 21 10:10:29.731084 containerd[1641]: time="2026-04-21T10:10:29.731024505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:29.732216 containerd[1641]: time="2026-04-21T10:10:29.732098278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:10:29.733345 containerd[1641]: time="2026-04-21T10:10:29.732948375Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:29.734684 containerd[1641]: time="2026-04-21T10:10:29.734653627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:29.735510 containerd[1641]: time="2026-04-21T10:10:29.735163995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.424532911s" Apr 21 10:10:29.735510 containerd[1641]: time="2026-04-21T10:10:29.735200099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:10:29.738479 containerd[1641]: time="2026-04-21T10:10:29.738446136Z" level=info msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:10:29.758354 containerd[1641]: time="2026-04-21T10:10:29.758309311Z" level=info 
msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a3ca2743f3887bc223a99078430b3fb601dcd0561cbd502d5f5ec035ea1751fb\"" Apr 21 10:10:29.759107 containerd[1641]: time="2026-04-21T10:10:29.758742752Z" level=info msg="StartContainer for \"a3ca2743f3887bc223a99078430b3fb601dcd0561cbd502d5f5ec035ea1751fb\"" Apr 21 10:10:29.810855 containerd[1641]: time="2026-04-21T10:10:29.810162401Z" level=info msg="StartContainer for \"a3ca2743f3887bc223a99078430b3fb601dcd0561cbd502d5f5ec035ea1751fb\" returns successfully" Apr 21 10:10:29.999431 containerd[1641]: time="2026-04-21T10:10:29.999344603Z" level=info msg="shim disconnected" id=a3ca2743f3887bc223a99078430b3fb601dcd0561cbd502d5f5ec035ea1751fb namespace=k8s.io Apr 21 10:10:29.999431 containerd[1641]: time="2026-04-21T10:10:29.999412526Z" level=warning msg="cleaning up after shim disconnected" id=a3ca2743f3887bc223a99078430b3fb601dcd0561cbd502d5f5ec035ea1751fb namespace=k8s.io Apr 21 10:10:29.999431 containerd[1641]: time="2026-04-21T10:10:29.999422280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:10:30.018166 containerd[1641]: time="2026-04-21T10:10:30.018099380Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:10:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:10:30.237241 kubelet[2751]: E0421 10:10:30.236620 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:30.333740 containerd[1641]: time="2026-04-21T10:10:30.333559548Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:10:30.426044 kubelet[2751]: I0421 10:10:30.425769 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:30.710768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3ca2743f3887bc223a99078430b3fb601dcd0561cbd502d5f5ec035ea1751fb-rootfs.mount: Deactivated successfully. Apr 21 10:10:32.240398 kubelet[2751]: E0421 10:10:32.240328 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:34.237347 kubelet[2751]: E0421 10:10:34.237175 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7trjk" podUID="83f3b362-fdfe-49c7-9fa1-a2d602501612" Apr 21 10:10:34.454618 containerd[1641]: time="2026-04-21T10:10:34.454557034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:34.455708 containerd[1641]: time="2026-04-21T10:10:34.455546060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:10:34.456552 containerd[1641]: time="2026-04-21T10:10:34.456445982Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:34.458128 containerd[1641]: time="2026-04-21T10:10:34.458100466Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:34.458940 containerd[1641]: time="2026-04-21T10:10:34.458549010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.124945486s" Apr 21 10:10:34.458940 containerd[1641]: time="2026-04-21T10:10:34.458573767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:10:34.461720 containerd[1641]: time="2026-04-21T10:10:34.461698720Z" level=info msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:10:34.476585 containerd[1641]: time="2026-04-21T10:10:34.476554201Z" level=info msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cc506e4d5fef2aad5ba709d5fa5cb02a9ce1bb8130fcb941c877156bedcee244\"" Apr 21 10:10:34.477493 containerd[1641]: time="2026-04-21T10:10:34.477470797Z" level=info msg="StartContainer for \"cc506e4d5fef2aad5ba709d5fa5cb02a9ce1bb8130fcb941c877156bedcee244\"" Apr 21 10:10:34.504648 systemd[1]: run-containerd-runc-k8s.io-cc506e4d5fef2aad5ba709d5fa5cb02a9ce1bb8130fcb941c877156bedcee244-runc.xoLJf0.mount: Deactivated successfully. 
Apr 21 10:10:34.537260 containerd[1641]: time="2026-04-21T10:10:34.537225118Z" level=info msg="StartContainer for \"cc506e4d5fef2aad5ba709d5fa5cb02a9ce1bb8130fcb941c877156bedcee244\" returns successfully" Apr 21 10:10:34.950170 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc506e4d5fef2aad5ba709d5fa5cb02a9ce1bb8130fcb941c877156bedcee244-rootfs.mount: Deactivated successfully. Apr 21 10:10:34.952002 containerd[1641]: time="2026-04-21T10:10:34.951818968Z" level=info msg="shim disconnected" id=cc506e4d5fef2aad5ba709d5fa5cb02a9ce1bb8130fcb941c877156bedcee244 namespace=k8s.io Apr 21 10:10:34.952002 containerd[1641]: time="2026-04-21T10:10:34.951987591Z" level=warning msg="cleaning up after shim disconnected" id=cc506e4d5fef2aad5ba709d5fa5cb02a9ce1bb8130fcb941c877156bedcee244 namespace=k8s.io Apr 21 10:10:34.952099 containerd[1641]: time="2026-04-21T10:10:34.952013930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:10:35.002471 kubelet[2751]: I0421 10:10:35.002207 2751 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:10:35.103331 kubelet[2751]: I0421 10:10:35.103025 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/062b1fa6-3260-4a3c-bfe5-a369e62e02c2-calico-apiserver-certs\") pod \"calico-apiserver-7f8dccf684-wzcc5\" (UID: \"062b1fa6-3260-4a3c-bfe5-a369e62e02c2\") " pod="calico-system/calico-apiserver-7f8dccf684-wzcc5" Apr 21 10:10:35.103331 kubelet[2751]: I0421 10:10:35.103070 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlcn\" (UniqueName: \"kubernetes.io/projected/062b1fa6-3260-4a3c-bfe5-a369e62e02c2-kube-api-access-nnlcn\") pod \"calico-apiserver-7f8dccf684-wzcc5\" (UID: \"062b1fa6-3260-4a3c-bfe5-a369e62e02c2\") " pod="calico-system/calico-apiserver-7f8dccf684-wzcc5" Apr 21 10:10:35.103331 kubelet[2751]: 
I0421 10:10:35.103084 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/badb72fe-d615-4587-8706-aaaa53adf7ae-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-pwqjb\" (UID: \"badb72fe-d615-4587-8706-aaaa53adf7ae\") " pod="calico-system/goldmane-5b85766d88-pwqjb" Apr 21 10:10:35.103331 kubelet[2751]: I0421 10:10:35.103095 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1428411e-a64c-4ee4-8cf2-81f0993ef792-config-volume\") pod \"coredns-674b8bbfcf-sdrhd\" (UID: \"1428411e-a64c-4ee4-8cf2-81f0993ef792\") " pod="kube-system/coredns-674b8bbfcf-sdrhd" Apr 21 10:10:35.103331 kubelet[2751]: I0421 10:10:35.103107 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-backend-key-pair\") pod \"whisker-554757894f-j76q7\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " pod="calico-system/whisker-554757894f-j76q7" Apr 21 10:10:35.103541 kubelet[2751]: I0421 10:10:35.103117 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a97df2fa-3c38-4d58-b70b-3760e19c0ab6-calico-apiserver-certs\") pod \"calico-apiserver-7f8dccf684-ft4jt\" (UID: \"a97df2fa-3c38-4d58-b70b-3760e19c0ab6\") " pod="calico-system/calico-apiserver-7f8dccf684-ft4jt" Apr 21 10:10:35.103541 kubelet[2751]: I0421 10:10:35.103145 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvn7n\" (UniqueName: \"kubernetes.io/projected/62a145a7-f01c-43dc-b01f-ede287e53eec-kube-api-access-lvn7n\") pod \"calico-kube-controllers-7f669854f8-7l8ws\" (UID: 
\"62a145a7-f01c-43dc-b01f-ede287e53eec\") " pod="calico-system/calico-kube-controllers-7f669854f8-7l8ws" Apr 21 10:10:35.103541 kubelet[2751]: I0421 10:10:35.103156 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p9b9\" (UniqueName: \"kubernetes.io/projected/badb72fe-d615-4587-8706-aaaa53adf7ae-kube-api-access-6p9b9\") pod \"goldmane-5b85766d88-pwqjb\" (UID: \"badb72fe-d615-4587-8706-aaaa53adf7ae\") " pod="calico-system/goldmane-5b85766d88-pwqjb" Apr 21 10:10:35.103541 kubelet[2751]: I0421 10:10:35.103166 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-nginx-config\") pod \"whisker-554757894f-j76q7\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " pod="calico-system/whisker-554757894f-j76q7" Apr 21 10:10:35.103541 kubelet[2751]: I0421 10:10:35.103179 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/badb72fe-d615-4587-8706-aaaa53adf7ae-config\") pod \"goldmane-5b85766d88-pwqjb\" (UID: \"badb72fe-d615-4587-8706-aaaa53adf7ae\") " pod="calico-system/goldmane-5b85766d88-pwqjb" Apr 21 10:10:35.103627 kubelet[2751]: I0421 10:10:35.103191 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcgw\" (UniqueName: \"kubernetes.io/projected/1428411e-a64c-4ee4-8cf2-81f0993ef792-kube-api-access-zgcgw\") pod \"coredns-674b8bbfcf-sdrhd\" (UID: \"1428411e-a64c-4ee4-8cf2-81f0993ef792\") " pod="kube-system/coredns-674b8bbfcf-sdrhd" Apr 21 10:10:35.103627 kubelet[2751]: I0421 10:10:35.103202 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/badb72fe-d615-4587-8706-aaaa53adf7ae-goldmane-key-pair\") pod 
\"goldmane-5b85766d88-pwqjb\" (UID: \"badb72fe-d615-4587-8706-aaaa53adf7ae\") " pod="calico-system/goldmane-5b85766d88-pwqjb" Apr 21 10:10:35.103627 kubelet[2751]: I0421 10:10:35.103230 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-ca-bundle\") pod \"whisker-554757894f-j76q7\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " pod="calico-system/whisker-554757894f-j76q7" Apr 21 10:10:35.103627 kubelet[2751]: I0421 10:10:35.103240 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/570e5e68-cc52-417a-8289-d4541cfa3d48-config-volume\") pod \"coredns-674b8bbfcf-z6wr9\" (UID: \"570e5e68-cc52-417a-8289-d4541cfa3d48\") " pod="kube-system/coredns-674b8bbfcf-z6wr9" Apr 21 10:10:35.103627 kubelet[2751]: I0421 10:10:35.103251 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tldkw\" (UniqueName: \"kubernetes.io/projected/570e5e68-cc52-417a-8289-d4541cfa3d48-kube-api-access-tldkw\") pod \"coredns-674b8bbfcf-z6wr9\" (UID: \"570e5e68-cc52-417a-8289-d4541cfa3d48\") " pod="kube-system/coredns-674b8bbfcf-z6wr9" Apr 21 10:10:35.103724 kubelet[2751]: I0421 10:10:35.103341 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqkzx\" (UniqueName: \"kubernetes.io/projected/e8f481da-cd9d-4039-816a-786b14831a9e-kube-api-access-xqkzx\") pod \"whisker-554757894f-j76q7\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " pod="calico-system/whisker-554757894f-j76q7" Apr 21 10:10:35.103724 kubelet[2751]: I0421 10:10:35.103353 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j5b\" (UniqueName: 
\"kubernetes.io/projected/a97df2fa-3c38-4d58-b70b-3760e19c0ab6-kube-api-access-t9j5b\") pod \"calico-apiserver-7f8dccf684-ft4jt\" (UID: \"a97df2fa-3c38-4d58-b70b-3760e19c0ab6\") " pod="calico-system/calico-apiserver-7f8dccf684-ft4jt" Apr 21 10:10:35.103724 kubelet[2751]: I0421 10:10:35.103390 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62a145a7-f01c-43dc-b01f-ede287e53eec-tigera-ca-bundle\") pod \"calico-kube-controllers-7f669854f8-7l8ws\" (UID: \"62a145a7-f01c-43dc-b01f-ede287e53eec\") " pod="calico-system/calico-kube-controllers-7f669854f8-7l8ws" Apr 21 10:10:35.341058 containerd[1641]: time="2026-04-21T10:10:35.340987665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdrhd,Uid:1428411e-a64c-4ee4-8cf2-81f0993ef792,Namespace:kube-system,Attempt:0,}" Apr 21 10:10:35.352916 containerd[1641]: time="2026-04-21T10:10:35.352561094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-wzcc5,Uid:062b1fa6-3260-4a3c-bfe5-a369e62e02c2,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:35.358422 containerd[1641]: time="2026-04-21T10:10:35.357307492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-ft4jt,Uid:a97df2fa-3c38-4d58-b70b-3760e19c0ab6,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:35.363990 containerd[1641]: time="2026-04-21T10:10:35.363952271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z6wr9,Uid:570e5e68-cc52-417a-8289-d4541cfa3d48,Namespace:kube-system,Attempt:0,}" Apr 21 10:10:35.364154 containerd[1641]: time="2026-04-21T10:10:35.364127614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f669854f8-7l8ws,Uid:62a145a7-f01c-43dc-b01f-ede287e53eec,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:35.368613 containerd[1641]: time="2026-04-21T10:10:35.368574832Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pwqjb,Uid:badb72fe-d615-4587-8706-aaaa53adf7ae,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:35.371112 containerd[1641]: time="2026-04-21T10:10:35.371066504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-554757894f-j76q7,Uid:e8f481da-cd9d-4039-816a-786b14831a9e,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:35.371428 containerd[1641]: time="2026-04-21T10:10:35.371310721Z" level=info msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:10:35.429443 containerd[1641]: time="2026-04-21T10:10:35.429408848Z" level=info msg="CreateContainer within sandbox \"3f423a9529d4a50d5265a7228a49e7062ed2a2b9e240f07448e5af9b4a739a77\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9b509d46977fa89947568bdd0632c7bbf503951233d710e2d4a0e6481202bb63\"" Apr 21 10:10:35.430804 containerd[1641]: time="2026-04-21T10:10:35.430249971Z" level=info msg="StartContainer for \"9b509d46977fa89947568bdd0632c7bbf503951233d710e2d4a0e6481202bb63\"" Apr 21 10:10:35.545674 systemd[1]: run-containerd-runc-k8s.io-9b509d46977fa89947568bdd0632c7bbf503951233d710e2d4a0e6481202bb63-runc.be8NP6.mount: Deactivated successfully. 
Apr 21 10:10:35.561215 containerd[1641]: time="2026-04-21T10:10:35.560765888Z" level=error msg="Failed to destroy network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.561215 containerd[1641]: time="2026-04-21T10:10:35.561070847Z" level=error msg="encountered an error cleaning up failed sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.561215 containerd[1641]: time="2026-04-21T10:10:35.561102975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-wzcc5,Uid:062b1fa6-3260-4a3c-bfe5-a369e62e02c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.562540 kubelet[2751]: E0421 10:10:35.561256 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.562540 kubelet[2751]: E0421 10:10:35.561308 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f8dccf684-wzcc5" Apr 21 10:10:35.562540 kubelet[2751]: E0421 10:10:35.561325 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f8dccf684-wzcc5" Apr 21 10:10:35.562819 kubelet[2751]: E0421 10:10:35.561382 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f8dccf684-wzcc5_calico-system(062b1fa6-3260-4a3c-bfe5-a369e62e02c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f8dccf684-wzcc5_calico-system(062b1fa6-3260-4a3c-bfe5-a369e62e02c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f8dccf684-wzcc5" podUID="062b1fa6-3260-4a3c-bfe5-a369e62e02c2" Apr 21 10:10:35.614243 containerd[1641]: time="2026-04-21T10:10:35.614062345Z" level=error msg="Failed to destroy network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.614243 containerd[1641]: time="2026-04-21T10:10:35.614212060Z" level=error msg="Failed to destroy network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.616535 containerd[1641]: time="2026-04-21T10:10:35.616505213Z" level=error msg="encountered an error cleaning up failed sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.616705 containerd[1641]: time="2026-04-21T10:10:35.616550281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdrhd,Uid:1428411e-a64c-4ee4-8cf2-81f0993ef792,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.616764 containerd[1641]: time="2026-04-21T10:10:35.616747177Z" level=error msg="encountered an error cleaning up failed sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.616785 containerd[1641]: time="2026-04-21T10:10:35.616769821Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z6wr9,Uid:570e5e68-cc52-417a-8289-d4541cfa3d48,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.616993 kubelet[2751]: E0421 10:10:35.616961 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.617040 kubelet[2751]: E0421 10:10:35.617014 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sdrhd" Apr 21 10:10:35.617040 kubelet[2751]: E0421 10:10:35.617030 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sdrhd" Apr 21 10:10:35.617115 kubelet[2751]: E0421 10:10:35.617069 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-sdrhd_kube-system(1428411e-a64c-4ee4-8cf2-81f0993ef792)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sdrhd_kube-system(1428411e-a64c-4ee4-8cf2-81f0993ef792)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sdrhd" podUID="1428411e-a64c-4ee4-8cf2-81f0993ef792" Apr 21 10:10:35.617366 kubelet[2751]: E0421 10:10:35.617344 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.617391 kubelet[2751]: E0421 10:10:35.617369 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z6wr9" Apr 21 10:10:35.617391 kubelet[2751]: E0421 10:10:35.617381 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z6wr9" Apr 21 10:10:35.617445 kubelet[2751]: E0421 10:10:35.617403 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z6wr9_kube-system(570e5e68-cc52-417a-8289-d4541cfa3d48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z6wr9_kube-system(570e5e68-cc52-417a-8289-d4541cfa3d48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z6wr9" podUID="570e5e68-cc52-417a-8289-d4541cfa3d48" Apr 21 10:10:35.634698 containerd[1641]: time="2026-04-21T10:10:35.634596792Z" level=info msg="StartContainer for \"9b509d46977fa89947568bdd0632c7bbf503951233d710e2d4a0e6481202bb63\" returns successfully" Apr 21 10:10:35.640791 containerd[1641]: time="2026-04-21T10:10:35.640749692Z" level=error msg="Failed to destroy network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.641309 containerd[1641]: time="2026-04-21T10:10:35.641107720Z" level=error msg="encountered an error cleaning up failed sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.641501 containerd[1641]: 
time="2026-04-21T10:10:35.641477004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f669854f8-7l8ws,Uid:62a145a7-f01c-43dc-b01f-ede287e53eec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.642381 kubelet[2751]: E0421 10:10:35.641679 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.642381 kubelet[2751]: E0421 10:10:35.641716 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f669854f8-7l8ws" Apr 21 10:10:35.642381 kubelet[2751]: E0421 10:10:35.641739 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f669854f8-7l8ws" Apr 21 
10:10:35.642477 kubelet[2751]: E0421 10:10:35.641780 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f669854f8-7l8ws_calico-system(62a145a7-f01c-43dc-b01f-ede287e53eec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f669854f8-7l8ws_calico-system(62a145a7-f01c-43dc-b01f-ede287e53eec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f669854f8-7l8ws" podUID="62a145a7-f01c-43dc-b01f-ede287e53eec" Apr 21 10:10:35.650168 containerd[1641]: time="2026-04-21T10:10:35.650130950Z" level=error msg="Failed to destroy network for sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.650459 containerd[1641]: time="2026-04-21T10:10:35.650439844Z" level=error msg="encountered an error cleaning up failed sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.650502 containerd[1641]: time="2026-04-21T10:10:35.650485943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-ft4jt,Uid:a97df2fa-3c38-4d58-b70b-3760e19c0ab6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.651603 kubelet[2751]: E0421 10:10:35.650617 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.651603 kubelet[2751]: E0421 10:10:35.650652 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f8dccf684-ft4jt" Apr 21 10:10:35.651603 kubelet[2751]: E0421 10:10:35.650679 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7f8dccf684-ft4jt" Apr 21 10:10:35.651948 kubelet[2751]: E0421 10:10:35.650713 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f8dccf684-ft4jt_calico-system(a97df2fa-3c38-4d58-b70b-3760e19c0ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"calico-apiserver-7f8dccf684-ft4jt_calico-system(a97df2fa-3c38-4d58-b70b-3760e19c0ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7f8dccf684-ft4jt" podUID="a97df2fa-3c38-4d58-b70b-3760e19c0ab6" Apr 21 10:10:35.663929 containerd[1641]: time="2026-04-21T10:10:35.663781338Z" level=error msg="Failed to destroy network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.664521 containerd[1641]: time="2026-04-21T10:10:35.664145385Z" level=error msg="encountered an error cleaning up failed sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.664521 containerd[1641]: time="2026-04-21T10:10:35.664194699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-554757894f-j76q7,Uid:e8f481da-cd9d-4039-816a-786b14831a9e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.664598 kubelet[2751]: E0421 10:10:35.664423 2751 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.665105 kubelet[2751]: E0421 10:10:35.664513 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-554757894f-j76q7" Apr 21 10:10:35.665105 kubelet[2751]: E0421 10:10:35.664690 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-554757894f-j76q7" Apr 21 10:10:35.665105 kubelet[2751]: E0421 10:10:35.664765 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-554757894f-j76q7_calico-system(e8f481da-cd9d-4039-816a-786b14831a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-554757894f-j76q7_calico-system(e8f481da-cd9d-4039-816a-786b14831a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-554757894f-j76q7" podUID="e8f481da-cd9d-4039-816a-786b14831a9e" Apr 21 10:10:35.670956 containerd[1641]: time="2026-04-21T10:10:35.670925047Z" level=error msg="Failed to destroy network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.671345 containerd[1641]: time="2026-04-21T10:10:35.671239328Z" level=error msg="encountered an error cleaning up failed sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.671345 containerd[1641]: time="2026-04-21T10:10:35.671269904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pwqjb,Uid:badb72fe-d615-4587-8706-aaaa53adf7ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.671420 kubelet[2751]: E0421 10:10:35.671385 2751 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:35.671420 
kubelet[2751]: E0421 10:10:35.671415 2751 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pwqjb" Apr 21 10:10:35.671455 kubelet[2751]: E0421 10:10:35.671429 2751 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-pwqjb" Apr 21 10:10:35.671766 kubelet[2751]: E0421 10:10:35.671462 2751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-pwqjb_calico-system(badb72fe-d615-4587-8706-aaaa53adf7ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-pwqjb_calico-system(badb72fe-d615-4587-8706-aaaa53adf7ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-pwqjb" podUID="badb72fe-d615-4587-8706-aaaa53adf7ae" Apr 21 10:10:36.242829 containerd[1641]: time="2026-04-21T10:10:36.242756769Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-7trjk,Uid:83f3b362-fdfe-49c7-9fa1-a2d602501612,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:36.351624 kubelet[2751]: I0421 10:10:36.351065 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:36.355787 containerd[1641]: time="2026-04-21T10:10:36.354121916Z" level=info msg="StopPodSandbox for \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\"" Apr 21 10:10:36.355787 containerd[1641]: time="2026-04-21T10:10:36.354360354Z" level=info msg="Ensure that sandbox c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87 in task-service has been cleanup successfully" Apr 21 10:10:36.362026 kubelet[2751]: I0421 10:10:36.361271 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:36.371141 containerd[1641]: time="2026-04-21T10:10:36.371108573Z" level=info msg="StopPodSandbox for \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\"" Apr 21 10:10:36.371596 containerd[1641]: time="2026-04-21T10:10:36.371479611Z" level=info msg="Ensure that sandbox 90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a in task-service has been cleanup successfully" Apr 21 10:10:36.372841 kubelet[2751]: I0421 10:10:36.372809 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:36.374287 containerd[1641]: time="2026-04-21T10:10:36.374019134Z" level=info msg="StopPodSandbox for \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\"" Apr 21 10:10:36.377071 containerd[1641]: time="2026-04-21T10:10:36.376414681Z" level=info msg="Ensure that sandbox 99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06 in task-service has been cleanup 
successfully" Apr 21 10:10:36.377129 kubelet[2751]: I0421 10:10:36.377012 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:36.378491 kubelet[2751]: I0421 10:10:36.378254 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sw6qg" podStartSLOduration=3.10159623 podStartE2EDuration="23.378245491s" podCreationTimestamp="2026-04-21 10:10:13 +0000 UTC" firstStartedPulling="2026-04-21 10:10:14.182505389 +0000 UTC m=+16.030603237" lastFinishedPulling="2026-04-21 10:10:34.45915465 +0000 UTC m=+36.307252498" observedRunningTime="2026-04-21 10:10:36.37622717 +0000 UTC m=+38.224325028" watchObservedRunningTime="2026-04-21 10:10:36.378245491 +0000 UTC m=+38.226343339" Apr 21 10:10:36.380472 containerd[1641]: time="2026-04-21T10:10:36.380452003Z" level=info msg="StopPodSandbox for \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\"" Apr 21 10:10:36.380591 containerd[1641]: time="2026-04-21T10:10:36.380571704Z" level=info msg="Ensure that sandbox 6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf in task-service has been cleanup successfully" Apr 21 10:10:36.382584 kubelet[2751]: I0421 10:10:36.382543 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:36.384056 containerd[1641]: time="2026-04-21T10:10:36.383994113Z" level=info msg="StopPodSandbox for \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\"" Apr 21 10:10:36.384228 containerd[1641]: time="2026-04-21T10:10:36.384196005Z" level=info msg="Ensure that sandbox 73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03 in task-service has been cleanup successfully" Apr 21 10:10:36.384344 kubelet[2751]: I0421 10:10:36.384286 2751 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:36.384667 containerd[1641]: time="2026-04-21T10:10:36.384644579Z" level=info msg="StopPodSandbox for \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\"" Apr 21 10:10:36.385425 containerd[1641]: time="2026-04-21T10:10:36.384819622Z" level=info msg="Ensure that sandbox ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764 in task-service has been cleanup successfully" Apr 21 10:10:36.390626 kubelet[2751]: I0421 10:10:36.390607 2751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:36.392432 containerd[1641]: time="2026-04-21T10:10:36.392032133Z" level=info msg="StopPodSandbox for \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\"" Apr 21 10:10:36.392432 containerd[1641]: time="2026-04-21T10:10:36.392162158Z" level=info msg="Ensure that sandbox e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46 in task-service has been cleanup successfully" Apr 21 10:10:36.449652 systemd-networkd[1269]: cali7fb5ab5fac5: Link UP Apr 21 10:10:36.450449 systemd-networkd[1269]: cali7fb5ab5fac5: Gained carrier Apr 21 10:10:36.471669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a-shm.mount: Deactivated successfully. Apr 21 10:10:36.471818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06-shm.mount: Deactivated successfully. Apr 21 10:10:36.472774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf-shm.mount: Deactivated successfully. 
Apr 21 10:10:36.473049 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764-shm.mount: Deactivated successfully. Apr 21 10:10:36.473180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46-shm.mount: Deactivated successfully. Apr 21 10:10:36.473412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87-shm.mount: Deactivated successfully. Apr 21 10:10:36.473566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03-shm.mount: Deactivated successfully. Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.297 [ERROR][3904] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.321 [INFO][3904] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0 csi-node-driver- calico-system 83f3b362-fdfe-49c7-9fa1-a2d602501612 688 0 2026-04-21 10:10:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f csi-node-driver-7trjk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7fb5ab5fac5 [] [] }} ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" 
WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.321 [INFO][3904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.354 [INFO][3916] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" HandleID="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Workload="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.367 [INFO][3916] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" HandleID="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Workload="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-553501566f", "pod":"csi-node-driver-7trjk", "timestamp":"2026-04-21 10:10:36.354744712 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001fd080)} Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.367 [INFO][3916] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.367 [INFO][3916] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.367 [INFO][3916] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.372 [INFO][3916] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.383 [INFO][3916] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.394 [INFO][3916] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.401 [INFO][3916] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.403 [INFO][3916] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.403 [INFO][3916] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.404 [INFO][3916] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.418 [INFO][3916] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.430 [INFO][3916] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.40.65/26] block=192.168.40.64/26 handle="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.430 [INFO][3916] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.65/26] handle="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.430 [INFO][3916] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.492195 containerd[1641]: 2026-04-21 10:10:36.430 [INFO][3916] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.65/26] IPv6=[] ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" HandleID="k8s-pod-network.19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Workload="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" Apr 21 10:10:36.492689 containerd[1641]: 2026-04-21 10:10:36.438 [INFO][3904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83f3b362-fdfe-49c7-9fa1-a2d602501612", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"csi-node-driver-7trjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fb5ab5fac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:36.492689 containerd[1641]: 2026-04-21 10:10:36.439 [INFO][3904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.65/32] ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" Apr 21 10:10:36.492689 containerd[1641]: 2026-04-21 10:10:36.439 [INFO][3904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fb5ab5fac5 ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" Apr 21 10:10:36.492689 containerd[1641]: 2026-04-21 10:10:36.451 [INFO][3904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" Apr 21 10:10:36.492689 
containerd[1641]: 2026-04-21 10:10:36.451 [INFO][3904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83f3b362-fdfe-49c7-9fa1-a2d602501612", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c", Pod:"csi-node-driver-7trjk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.40.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7fb5ab5fac5", MAC:"ea:20:19:fb:35:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:36.492689 containerd[1641]: 
2026-04-21 10:10:36.465 [INFO][3904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c" Namespace="calico-system" Pod="csi-node-driver-7trjk" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-csi--node--driver--7trjk-eth0" Apr 21 10:10:36.643328 containerd[1641]: time="2026-04-21T10:10:36.643035721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:36.643328 containerd[1641]: time="2026-04-21T10:10:36.643104646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:36.643328 containerd[1641]: time="2026-04-21T10:10:36.643115292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:36.643328 containerd[1641]: time="2026-04-21T10:10:36.643202733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.524 [INFO][3931] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.524 [INFO][3931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" iface="eth0" netns="/var/run/netns/cni-95af0adb-209a-10d7-de27-0f8eaa0b0fbc" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.524 [INFO][3931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" iface="eth0" netns="/var/run/netns/cni-95af0adb-209a-10d7-de27-0f8eaa0b0fbc" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.526 [INFO][3931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" iface="eth0" netns="/var/run/netns/cni-95af0adb-209a-10d7-de27-0f8eaa0b0fbc" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.527 [INFO][3931] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.527 [INFO][3931] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.680 [INFO][4035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.680 [INFO][4035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.680 [INFO][4035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.700 [WARNING][4035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.700 [INFO][4035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.702 [INFO][4035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.729107 containerd[1641]: 2026-04-21 10:10:36.714 [INFO][3931] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:36.734969 systemd[1]: run-netns-cni\x2d95af0adb\x2d209a\x2d10d7\x2dde27\x2d0f8eaa0b0fbc.mount: Deactivated successfully. 
Apr 21 10:10:36.738163 containerd[1641]: time="2026-04-21T10:10:36.737982004Z" level=info msg="TearDown network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\" successfully" Apr 21 10:10:36.738163 containerd[1641]: time="2026-04-21T10:10:36.738032450Z" level=info msg="StopPodSandbox for \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\" returns successfully" Apr 21 10:10:36.738787 containerd[1641]: time="2026-04-21T10:10:36.738769737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-wzcc5,Uid:062b1fa6-3260-4a3c-bfe5-a369e62e02c2,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.488 [INFO][3986] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.488 [INFO][3986] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" iface="eth0" netns="/var/run/netns/cni-2c7b0d1e-44bd-5f6b-1855-da1f17e6b2fc" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.488 [INFO][3986] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" iface="eth0" netns="/var/run/netns/cni-2c7b0d1e-44bd-5f6b-1855-da1f17e6b2fc" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.488 [INFO][3986] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" iface="eth0" netns="/var/run/netns/cni-2c7b0d1e-44bd-5f6b-1855-da1f17e6b2fc" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.488 [INFO][3986] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.489 [INFO][3986] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.681 [INFO][4024] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.681 [INFO][4024] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.717 [INFO][4024] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.728 [WARNING][4024] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.728 [INFO][4024] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.732 [INFO][4024] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.748355 containerd[1641]: 2026-04-21 10:10:36.742 [INFO][3986] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.552 [INFO][4005] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.552 [INFO][4005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" iface="eth0" netns="/var/run/netns/cni-68c2920c-6e8d-f211-c6c3-a10fa88b7397" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.554 [INFO][4005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" iface="eth0" netns="/var/run/netns/cni-68c2920c-6e8d-f211-c6c3-a10fa88b7397" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.565 [INFO][4005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" iface="eth0" netns="/var/run/netns/cni-68c2920c-6e8d-f211-c6c3-a10fa88b7397" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.565 [INFO][4005] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.565 [INFO][4005] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.704 [INFO][4038] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.704 [INFO][4038] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.704 [INFO][4038] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.716 [WARNING][4038] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.716 [INFO][4038] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.717 [INFO][4038] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.748706 containerd[1641]: 2026-04-21 10:10:36.740 [INFO][4005] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:36.749856 containerd[1641]: time="2026-04-21T10:10:36.749025707Z" level=info msg="TearDown network for sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\" successfully" Apr 21 10:10:36.749856 containerd[1641]: time="2026-04-21T10:10:36.749045797Z" level=info msg="StopPodSandbox for \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\" returns successfully" Apr 21 10:10:36.750264 containerd[1641]: time="2026-04-21T10:10:36.750194833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-ft4jt,Uid:a97df2fa-3c38-4d58-b70b-3760e19c0ab6,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:36.751033 containerd[1641]: time="2026-04-21T10:10:36.751000622Z" level=info msg="TearDown network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\" successfully" Apr 21 10:10:36.751033 containerd[1641]: time="2026-04-21T10:10:36.751019612Z" level=info msg="StopPodSandbox for 
\"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\" returns successfully" Apr 21 10:10:36.753995 containerd[1641]: time="2026-04-21T10:10:36.751406262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f669854f8-7l8ws,Uid:62a145a7-f01c-43dc-b01f-ede287e53eec,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:36.754814 systemd[1]: run-netns-cni\x2d68c2920c\x2d6e8d\x2df211\x2dc6c3\x2da10fa88b7397.mount: Deactivated successfully. Apr 21 10:10:36.778119 containerd[1641]: time="2026-04-21T10:10:36.778086355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7trjk,Uid:83f3b362-fdfe-49c7-9fa1-a2d602501612,Namespace:calico-system,Attempt:0,} returns sandbox id \"19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c\"" Apr 21 10:10:36.783357 containerd[1641]: time="2026-04-21T10:10:36.783296709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.698 [INFO][4000] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.698 [INFO][4000] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" iface="eth0" netns="/var/run/netns/cni-f256ed2b-b0d5-d5a7-39c1-86d6b6c0c7e1" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.699 [INFO][4000] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" iface="eth0" netns="/var/run/netns/cni-f256ed2b-b0d5-d5a7-39c1-86d6b6c0c7e1" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.700 [INFO][4000] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" iface="eth0" netns="/var/run/netns/cni-f256ed2b-b0d5-d5a7-39c1-86d6b6c0c7e1" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.700 [INFO][4000] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.700 [INFO][4000] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.760 [INFO][4095] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.762 [INFO][4095] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.763 [INFO][4095] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.769 [WARNING][4095] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.769 [INFO][4095] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.772 [INFO][4095] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.787729 containerd[1641]: 2026-04-21 10:10:36.780 [INFO][4000] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:36.788665 containerd[1641]: time="2026-04-21T10:10:36.788182946Z" level=info msg="TearDown network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\" successfully" Apr 21 10:10:36.788665 containerd[1641]: time="2026-04-21T10:10:36.788200702Z" level=info msg="StopPodSandbox for \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\" returns successfully" Apr 21 10:10:36.789148 containerd[1641]: time="2026-04-21T10:10:36.788984911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdrhd,Uid:1428411e-a64c-4ee4-8cf2-81f0993ef792,Namespace:kube-system,Attempt:1,}" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.685 [INFO][3999] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.689 [INFO][3999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" iface="eth0" netns="/var/run/netns/cni-9d7827fc-7081-2597-9619-b6cb636cad72" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.689 [INFO][3999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" iface="eth0" netns="/var/run/netns/cni-9d7827fc-7081-2597-9619-b6cb636cad72" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.689 [INFO][3999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" iface="eth0" netns="/var/run/netns/cni-9d7827fc-7081-2597-9619-b6cb636cad72" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.689 [INFO][3999] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.689 [INFO][3999] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.773 [INFO][4092] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.773 [INFO][4092] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.773 [INFO][4092] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.778 [WARNING][4092] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.778 [INFO][4092] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.779 [INFO][4092] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.800336 containerd[1641]: 2026-04-21 10:10:36.782 [INFO][3999] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:36.800776 containerd[1641]: time="2026-04-21T10:10:36.800758390Z" level=info msg="TearDown network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\" successfully" Apr 21 10:10:36.800848 containerd[1641]: time="2026-04-21T10:10:36.800822787Z" level=info msg="StopPodSandbox for \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\" returns successfully" Apr 21 10:10:36.801446 containerd[1641]: time="2026-04-21T10:10:36.801288266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pwqjb,Uid:badb72fe-d615-4587-8706-aaaa53adf7ae,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.633 [INFO][3983] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.634 [INFO][3983] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" iface="eth0" netns="/var/run/netns/cni-68304390-6548-8e35-7264-a35904f37dfb" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.634 [INFO][3983] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" iface="eth0" netns="/var/run/netns/cni-68304390-6548-8e35-7264-a35904f37dfb" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.636 [INFO][3983] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" iface="eth0" netns="/var/run/netns/cni-68304390-6548-8e35-7264-a35904f37dfb" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.636 [INFO][3983] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.636 [INFO][3983] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.763 [INFO][4067] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.763 [INFO][4067] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.779 [INFO][4067] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.793 [WARNING][4067] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.793 [INFO][4067] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.796 [INFO][4067] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.807057 containerd[1641]: 2026-04-21 10:10:36.803 [INFO][3983] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:36.807955 containerd[1641]: time="2026-04-21T10:10:36.807444499Z" level=info msg="TearDown network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\" successfully" Apr 21 10:10:36.807955 containerd[1641]: time="2026-04-21T10:10:36.807461054Z" level=info msg="StopPodSandbox for \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\" returns successfully" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.671 [INFO][3972] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.671 [INFO][3972] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" iface="eth0" netns="/var/run/netns/cni-e9a326e3-7340-0979-af7a-d7623d55bbf5" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.672 [INFO][3972] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" iface="eth0" netns="/var/run/netns/cni-e9a326e3-7340-0979-af7a-d7623d55bbf5" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.674 [INFO][3972] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" iface="eth0" netns="/var/run/netns/cni-e9a326e3-7340-0979-af7a-d7623d55bbf5" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.674 [INFO][3972] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.674 [INFO][3972] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.816 [INFO][4086] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.816 [INFO][4086] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.817 [INFO][4086] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.828 [WARNING][4086] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.828 [INFO][4086] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.831 [INFO][4086] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.850043 containerd[1641]: 2026-04-21 10:10:36.843 [INFO][3972] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:36.850677 containerd[1641]: time="2026-04-21T10:10:36.850527614Z" level=info msg="TearDown network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\" successfully" Apr 21 10:10:36.850677 containerd[1641]: time="2026-04-21T10:10:36.850571419Z" level=info msg="StopPodSandbox for \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\" returns successfully" Apr 21 10:10:36.851689 containerd[1641]: time="2026-04-21T10:10:36.851424380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z6wr9,Uid:570e5e68-cc52-417a-8289-d4541cfa3d48,Namespace:kube-system,Attempt:1,}" Apr 21 10:10:36.919338 kubelet[2751]: I0421 10:10:36.918548 2751 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-ca-bundle\") pod \"e8f481da-cd9d-4039-816a-786b14831a9e\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " Apr 21 10:10:36.924171 kubelet[2751]: I0421 10:10:36.921562 2751 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-nginx-config\") pod \"e8f481da-cd9d-4039-816a-786b14831a9e\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " Apr 21 10:10:36.924171 kubelet[2751]: I0421 10:10:36.921583 2751 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-backend-key-pair\") pod \"e8f481da-cd9d-4039-816a-786b14831a9e\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " Apr 21 10:10:36.924171 kubelet[2751]: I0421 10:10:36.921607 2751 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqkzx\" (UniqueName: 
\"kubernetes.io/projected/e8f481da-cd9d-4039-816a-786b14831a9e-kube-api-access-xqkzx\") pod \"e8f481da-cd9d-4039-816a-786b14831a9e\" (UID: \"e8f481da-cd9d-4039-816a-786b14831a9e\") " Apr 21 10:10:36.929857 kubelet[2751]: I0421 10:10:36.929267 2751 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e8f481da-cd9d-4039-816a-786b14831a9e" (UID: "e8f481da-cd9d-4039-816a-786b14831a9e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:10:36.931856 kubelet[2751]: I0421 10:10:36.931533 2751 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "e8f481da-cd9d-4039-816a-786b14831a9e" (UID: "e8f481da-cd9d-4039-816a-786b14831a9e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:10:36.940124 kubelet[2751]: I0421 10:10:36.939718 2751 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f481da-cd9d-4039-816a-786b14831a9e-kube-api-access-xqkzx" (OuterVolumeSpecName: "kube-api-access-xqkzx") pod "e8f481da-cd9d-4039-816a-786b14831a9e" (UID: "e8f481da-cd9d-4039-816a-786b14831a9e"). InnerVolumeSpecName "kube-api-access-xqkzx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:10:36.945951 kubelet[2751]: I0421 10:10:36.943405 2751 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e8f481da-cd9d-4039-816a-786b14831a9e" (UID: "e8f481da-cd9d-4039-816a-786b14831a9e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:10:36.949082 systemd-networkd[1269]: calib74f08265d0: Link UP Apr 21 10:10:36.951316 systemd-networkd[1269]: calib74f08265d0: Gained carrier Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.855 [ERROR][4133] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.866 [INFO][4133] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0 calico-apiserver-7f8dccf684- calico-system a97df2fa-3c38-4d58-b70b-3760e19c0ab6 887 0 2026-04-21 10:10:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f8dccf684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f calico-apiserver-7f8dccf684-ft4jt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib74f08265d0 [] [] }} ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.866 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.894 [INFO][4182] ipam/ipam_plugin.go 235: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" HandleID="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.901 [INFO][4182] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" HandleID="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdb90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-553501566f", "pod":"calico-apiserver-7f8dccf684-ft4jt", "timestamp":"2026-04-21 10:10:36.894188946 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000254dc0)} Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.901 [INFO][4182] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.901 [INFO][4182] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.901 [INFO][4182] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.907 [INFO][4182] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.913 [INFO][4182] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.919 [INFO][4182] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.922 [INFO][4182] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.925 [INFO][4182] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.925 [INFO][4182] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.927 [INFO][4182] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131 Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.932 [INFO][4182] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.942 [INFO][4182] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.40.66/26] block=192.168.40.64/26 handle="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.944 [INFO][4182] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.66/26] handle="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.944 [INFO][4182] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:36.995323 containerd[1641]: 2026-04-21 10:10:36.944 [INFO][4182] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.66/26] IPv6=[] ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" HandleID="k8s-pod-network.0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.995765 containerd[1641]: 2026-04-21 10:10:36.946 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"a97df2fa-3c38-4d58-b70b-3760e19c0ab6", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"calico-apiserver-7f8dccf684-ft4jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib74f08265d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:36.995765 containerd[1641]: 2026-04-21 10:10:36.946 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.66/32] ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.995765 containerd[1641]: 2026-04-21 10:10:36.946 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib74f08265d0 ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.995765 containerd[1641]: 2026-04-21 10:10:36.952 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" 
WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:36.995765 containerd[1641]: 2026-04-21 10:10:36.960 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"a97df2fa-3c38-4d58-b70b-3760e19c0ab6", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131", Pod:"calico-apiserver-7f8dccf684-ft4jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib74f08265d0", MAC:"e6:d6:ba:c9:c6:e6", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:36.995765 containerd[1641]: 2026-04-21 10:10:36.975 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-ft4jt" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:37.023028 kubelet[2751]: I0421 10:10:37.022616 2751 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-ca-bundle\") on node \"ci-4081-3-7-e-553501566f\" DevicePath \"\"" Apr 21 10:10:37.023028 kubelet[2751]: I0421 10:10:37.022640 2751 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e8f481da-cd9d-4039-816a-786b14831a9e-nginx-config\") on node \"ci-4081-3-7-e-553501566f\" DevicePath \"\"" Apr 21 10:10:37.023028 kubelet[2751]: I0421 10:10:37.022648 2751 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e8f481da-cd9d-4039-816a-786b14831a9e-whisker-backend-key-pair\") on node \"ci-4081-3-7-e-553501566f\" DevicePath \"\"" Apr 21 10:10:37.023028 kubelet[2751]: I0421 10:10:37.022679 2751 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xqkzx\" (UniqueName: \"kubernetes.io/projected/e8f481da-cd9d-4039-816a-786b14831a9e-kube-api-access-xqkzx\") on node \"ci-4081-3-7-e-553501566f\" DevicePath \"\"" Apr 21 10:10:37.165353 systemd-networkd[1269]: cali6439df04bd9: Link UP Apr 21 10:10:37.165522 systemd-networkd[1269]: cali6439df04bd9: Gained carrier Apr 21 10:10:37.184853 containerd[1641]: time="2026-04-21T10:10:37.178769777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:37.184853 containerd[1641]: time="2026-04-21T10:10:37.178808966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:37.184853 containerd[1641]: time="2026-04-21T10:10:37.178816758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.184853 containerd[1641]: time="2026-04-21T10:10:37.178937398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:36.816 [ERROR][4122] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:36.835 [INFO][4122] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0 calico-apiserver-7f8dccf684- calico-system 062b1fa6-3260-4a3c-bfe5-a369e62e02c2 886 0 2026-04-21 10:10:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f8dccf684 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f calico-apiserver-7f8dccf684-wzcc5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali6439df04bd9 [] [] }} ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-" Apr 21 
10:10:37.209593 containerd[1641]: 2026-04-21 10:10:36.835 [INFO][4122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:36.998 [INFO][4174] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" HandleID="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.033 [INFO][4174] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" HandleID="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-553501566f", "pod":"calico-apiserver-7f8dccf684-wzcc5", "timestamp":"2026-04-21 10:10:36.998811428 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001c0b00)} Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.033 [INFO][4174] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.033 [INFO][4174] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.033 [INFO][4174] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.046 [INFO][4174] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.057 [INFO][4174] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.073 [INFO][4174] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.085 [INFO][4174] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.091 [INFO][4174] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.091 [INFO][4174] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.095 [INFO][4174] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820 Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.106 [INFO][4174] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.114 [INFO][4174] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.40.67/26] block=192.168.40.64/26 handle="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.114 [INFO][4174] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.67/26] handle="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.114 [INFO][4174] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:37.209593 containerd[1641]: 2026-04-21 10:10:37.114 [INFO][4174] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.67/26] IPv6=[] ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" HandleID="k8s-pod-network.de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:37.210538 containerd[1641]: 2026-04-21 10:10:37.158 [INFO][4122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"062b1fa6-3260-4a3c-bfe5-a369e62e02c2", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"calico-apiserver-7f8dccf684-wzcc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6439df04bd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.210538 containerd[1641]: 2026-04-21 10:10:37.158 [INFO][4122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.67/32] ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:37.210538 containerd[1641]: 2026-04-21 10:10:37.158 [INFO][4122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6439df04bd9 ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:37.210538 containerd[1641]: 2026-04-21 10:10:37.167 [INFO][4122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" 
WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:37.210538 containerd[1641]: 2026-04-21 10:10:37.174 [INFO][4122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"062b1fa6-3260-4a3c-bfe5-a369e62e02c2", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820", Pod:"calico-apiserver-7f8dccf684-wzcc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6439df04bd9", MAC:"a6:b4:51:af:3c:b9", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.210538 containerd[1641]: 2026-04-21 10:10:37.197 [INFO][4122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820" Namespace="calico-system" Pod="calico-apiserver-7f8dccf684-wzcc5" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:37.320481 containerd[1641]: time="2026-04-21T10:10:37.320170101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:37.320481 containerd[1641]: time="2026-04-21T10:10:37.320242510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:37.320918 containerd[1641]: time="2026-04-21T10:10:37.320258073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.327329 containerd[1641]: time="2026-04-21T10:10:37.327211633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.393812 systemd-networkd[1269]: cali355f1e45863: Link UP Apr 21 10:10:37.394888 systemd-networkd[1269]: cali355f1e45863: Gained carrier Apr 21 10:10:37.402877 kubelet[2751]: I0421 10:10:37.402116 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.037 [ERROR][4151] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.066 [INFO][4151] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0 coredns-674b8bbfcf- kube-system 1428411e-a64c-4ee4-8cf2-81f0993ef792 892 0 2026-04-21 10:10:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f coredns-674b8bbfcf-sdrhd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali355f1e45863 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.066 [INFO][4151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.237 [INFO][4313] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" HandleID="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.269 [INFO][4313] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" HandleID="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000603990), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-e-553501566f", "pod":"coredns-674b8bbfcf-sdrhd", "timestamp":"2026-04-21 10:10:37.237405368 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000518c60)} Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.270 [INFO][4313] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.270 [INFO][4313] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.270 [INFO][4313] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.282 [INFO][4313] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.303 [INFO][4313] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.317 [INFO][4313] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.325 [INFO][4313] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.327 [INFO][4313] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.327 [INFO][4313] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.333 [INFO][4313] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.341 [INFO][4313] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.360 [INFO][4313] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.40.68/26] block=192.168.40.64/26 handle="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.360 [INFO][4313] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.68/26] handle="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.360 [INFO][4313] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:37.417769 containerd[1641]: 2026-04-21 10:10:37.360 [INFO][4313] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.68/26] IPv6=[] ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" HandleID="k8s-pod-network.5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:37.418278 containerd[1641]: 2026-04-21 10:10:37.379 [INFO][4151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1428411e-a64c-4ee4-8cf2-81f0993ef792", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"coredns-674b8bbfcf-sdrhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali355f1e45863", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.418278 containerd[1641]: 2026-04-21 10:10:37.379 [INFO][4151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.68/32] ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:37.418278 containerd[1641]: 2026-04-21 10:10:37.379 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali355f1e45863 ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:37.418278 containerd[1641]: 2026-04-21 10:10:37.395 [INFO][4151] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:37.418278 containerd[1641]: 2026-04-21 10:10:37.395 [INFO][4151] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1428411e-a64c-4ee4-8cf2-81f0993ef792", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a", Pod:"coredns-674b8bbfcf-sdrhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali355f1e45863", 
MAC:"2a:fc:03:2a:3e:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.418278 containerd[1641]: 2026-04-21 10:10:37.409 [INFO][4151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a" Namespace="kube-system" Pod="coredns-674b8bbfcf-sdrhd" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:37.422123 containerd[1641]: time="2026-04-21T10:10:37.421630004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-ft4jt,Uid:a97df2fa-3c38-4d58-b70b-3760e19c0ab6,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131\"" Apr 21 10:10:37.500422 systemd[1]: run-netns-cni\x2d68304390\x2d6548\x2d8e35\x2d7264\x2da35904f37dfb.mount: Deactivated successfully. Apr 21 10:10:37.500578 systemd[1]: run-netns-cni\x2d9d7827fc\x2d7081\x2d2597\x2d9619\x2db6cb636cad72.mount: Deactivated successfully. Apr 21 10:10:37.500690 systemd[1]: run-netns-cni\x2d2c7b0d1e\x2d44bd\x2d5f6b\x2d1855\x2dda1f17e6b2fc.mount: Deactivated successfully. Apr 21 10:10:37.500791 systemd[1]: run-netns-cni\x2de9a326e3\x2d7340\x2d0979\x2daf7a\x2dd7623d55bbf5.mount: Deactivated successfully. Apr 21 10:10:37.502958 systemd[1]: run-netns-cni\x2df256ed2b\x2db0d5\x2dd5a7\x2d39c1\x2d86d6b6c0c7e1.mount: Deactivated successfully. 
Apr 21 10:10:37.503077 systemd[1]: var-lib-kubelet-pods-e8f481da\x2dcd9d\x2d4039\x2d816a\x2d786b14831a9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxqkzx.mount: Deactivated successfully. Apr 21 10:10:37.503187 systemd[1]: var-lib-kubelet-pods-e8f481da\x2dcd9d\x2d4039\x2d816a\x2d786b14831a9e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:10:37.517436 systemd-networkd[1269]: cali78c8df874f0: Link UP Apr 21 10:10:37.519056 systemd-networkd[1269]: cali78c8df874f0: Gained carrier Apr 21 10:10:37.535471 kubelet[2751]: I0421 10:10:37.531803 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a8f3a6ac-dfe5-4133-92b4-9cb9343849a7-nginx-config\") pod \"whisker-6bcb98597d-gzvk6\" (UID: \"a8f3a6ac-dfe5-4133-92b4-9cb9343849a7\") " pod="calico-system/whisker-6bcb98597d-gzvk6" Apr 21 10:10:37.536045 kubelet[2751]: I0421 10:10:37.535918 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8f3a6ac-dfe5-4133-92b4-9cb9343849a7-whisker-ca-bundle\") pod \"whisker-6bcb98597d-gzvk6\" (UID: \"a8f3a6ac-dfe5-4133-92b4-9cb9343849a7\") " pod="calico-system/whisker-6bcb98597d-gzvk6" Apr 21 10:10:37.536241 kubelet[2751]: I0421 10:10:37.536197 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7hwt\" (UniqueName: \"kubernetes.io/projected/a8f3a6ac-dfe5-4133-92b4-9cb9343849a7-kube-api-access-x7hwt\") pod \"whisker-6bcb98597d-gzvk6\" (UID: \"a8f3a6ac-dfe5-4133-92b4-9cb9343849a7\") " pod="calico-system/whisker-6bcb98597d-gzvk6" Apr 21 10:10:37.536487 kubelet[2751]: I0421 10:10:37.536298 2751 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/a8f3a6ac-dfe5-4133-92b4-9cb9343849a7-whisker-backend-key-pair\") pod \"whisker-6bcb98597d-gzvk6\" (UID: \"a8f3a6ac-dfe5-4133-92b4-9cb9343849a7\") " pod="calico-system/whisker-6bcb98597d-gzvk6" Apr 21 10:10:37.552595 systemd[1]: run-containerd-runc-k8s.io-de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820-runc.f4gkJx.mount: Deactivated successfully. Apr 21 10:10:37.556923 kernel: calico-node[4260]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.004 [ERROR][4144] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.046 [INFO][4144] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0 calico-kube-controllers-7f669854f8- calico-system 62a145a7-f01c-43dc-b01f-ede287e53eec 885 0 2026-04-21 10:10:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f669854f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f calico-kube-controllers-7f669854f8-7l8ws eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali78c8df874f0 [] [] }} ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.046 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.244 [INFO][4307] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" HandleID="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.283 [INFO][4307] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" HandleID="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002143d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-553501566f", "pod":"calico-kube-controllers-7f669854f8-7l8ws", "timestamp":"2026-04-21 10:10:37.24492512 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a8000)} Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.283 [INFO][4307] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.366 [INFO][4307] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.367 [INFO][4307] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.376 [INFO][4307] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.414 [INFO][4307] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.424 [INFO][4307] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.434 [INFO][4307] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.438 [INFO][4307] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.438 [INFO][4307] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.443 [INFO][4307] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019 Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.450 [INFO][4307] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.458 [INFO][4307] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.40.69/26] block=192.168.40.64/26 handle="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.458 [INFO][4307] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.69/26] handle="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.458 [INFO][4307] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:37.559256 containerd[1641]: 2026-04-21 10:10:37.458 [INFO][4307] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.69/26] IPv6=[] ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" HandleID="k8s-pod-network.9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:37.559693 containerd[1641]: 2026-04-21 10:10:37.471 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0", GenerateName:"calico-kube-controllers-7f669854f8-", Namespace:"calico-system", SelfLink:"", UID:"62a145a7-f01c-43dc-b01f-ede287e53eec", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f669854f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"calico-kube-controllers-7f669854f8-7l8ws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c8df874f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.559693 containerd[1641]: 2026-04-21 10:10:37.471 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.69/32] ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:37.559693 containerd[1641]: 2026-04-21 10:10:37.471 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali78c8df874f0 ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:37.559693 containerd[1641]: 2026-04-21 10:10:37.520 [INFO][4144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:37.559693 containerd[1641]: 2026-04-21 10:10:37.521 [INFO][4144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0", GenerateName:"calico-kube-controllers-7f669854f8-", Namespace:"calico-system", SelfLink:"", UID:"62a145a7-f01c-43dc-b01f-ede287e53eec", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f669854f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019", Pod:"calico-kube-controllers-7f669854f8-7l8ws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c8df874f0", MAC:"3e:62:ed:c7:08:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.559693 containerd[1641]: 2026-04-21 10:10:37.552 [INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019" Namespace="calico-system" Pod="calico-kube-controllers-7f669854f8-7l8ws" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:37.574391 containerd[1641]: time="2026-04-21T10:10:37.571938083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:37.574391 containerd[1641]: time="2026-04-21T10:10:37.574344086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:37.574391 containerd[1641]: time="2026-04-21T10:10:37.574353550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.575025 containerd[1641]: time="2026-04-21T10:10:37.574430335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.630191 systemd-networkd[1269]: calied79d9aa75c: Link UP Apr 21 10:10:37.635221 systemd-networkd[1269]: calied79d9aa75c: Gained carrier Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.065 [ERROR][4165] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.183 [INFO][4165] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0 goldmane-5b85766d88- calico-system badb72fe-d615-4587-8706-aaaa53adf7ae 891 0 2026-04-21 10:10:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f goldmane-5b85766d88-pwqjb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calied79d9aa75c [] [] }} ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.185 [INFO][4165] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.415 [INFO][4341] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" 
HandleID="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.453 [INFO][4341] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" HandleID="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003de470), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-553501566f", "pod":"goldmane-5b85766d88-pwqjb", "timestamp":"2026-04-21 10:10:37.415571978 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112420)} Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.453 [INFO][4341] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.468 [INFO][4341] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.468 [INFO][4341] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.504 [INFO][4341] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.540 [INFO][4341] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.563 [INFO][4341] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.568 [INFO][4341] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.573 [INFO][4341] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.573 [INFO][4341] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.576 [INFO][4341] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.589 [INFO][4341] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.599 [INFO][4341] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.40.70/26] block=192.168.40.64/26 handle="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.601 [INFO][4341] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.70/26] handle="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.601 [INFO][4341] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:37.672894 containerd[1641]: 2026-04-21 10:10:37.601 [INFO][4341] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.70/26] IPv6=[] ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" HandleID="k8s-pod-network.3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:37.674584 containerd[1641]: 2026-04-21 10:10:37.618 [INFO][4165] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"badb72fe-d615-4587-8706-aaaa53adf7ae", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"goldmane-5b85766d88-pwqjb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calied79d9aa75c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.674584 containerd[1641]: 2026-04-21 10:10:37.618 [INFO][4165] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.70/32] ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:37.674584 containerd[1641]: 2026-04-21 10:10:37.618 [INFO][4165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied79d9aa75c ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:37.674584 containerd[1641]: 2026-04-21 10:10:37.628 [INFO][4165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:37.674584 containerd[1641]: 2026-04-21 10:10:37.629 [INFO][4165] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"badb72fe-d615-4587-8706-aaaa53adf7ae", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc", Pod:"goldmane-5b85766d88-pwqjb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calied79d9aa75c", MAC:"e6:c4:da:5e:26:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.674584 containerd[1641]: 2026-04-21 10:10:37.642 [INFO][4165] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc" Namespace="calico-system" Pod="goldmane-5b85766d88-pwqjb" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:37.740172 containerd[1641]: time="2026-04-21T10:10:37.740139208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f8dccf684-wzcc5,Uid:062b1fa6-3260-4a3c-bfe5-a369e62e02c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820\"" Apr 21 10:10:37.749024 containerd[1641]: time="2026-04-21T10:10:37.746302231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:37.749024 containerd[1641]: time="2026-04-21T10:10:37.747186600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:37.749024 containerd[1641]: time="2026-04-21T10:10:37.747196254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.749024 containerd[1641]: time="2026-04-21T10:10:37.747919500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.770943 containerd[1641]: time="2026-04-21T10:10:37.765247989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:37.770943 containerd[1641]: time="2026-04-21T10:10:37.765320448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:37.770943 containerd[1641]: time="2026-04-21T10:10:37.765342791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.770943 containerd[1641]: time="2026-04-21T10:10:37.765611485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.784867 systemd-networkd[1269]: calib9aaf3855ae: Link UP Apr 21 10:10:37.785074 systemd-networkd[1269]: calib9aaf3855ae: Gained carrier Apr 21 10:10:37.820097 containerd[1641]: time="2026-04-21T10:10:37.820066690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bcb98597d-gzvk6,Uid:a8f3a6ac-dfe5-4133-92b4-9cb9343849a7,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.179 [ERROR][4224] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.211 [INFO][4224] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0 coredns-674b8bbfcf- kube-system 570e5e68-cc52-417a-8289-d4541cfa3d48 890 0 2026-04-21 10:10:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f coredns-674b8bbfcf-z6wr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib9aaf3855ae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.212 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.413 [INFO][4355] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" HandleID="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.475 [INFO][4355] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" HandleID="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003dff20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-e-553501566f", "pod":"coredns-674b8bbfcf-z6wr9", "timestamp":"2026-04-21 10:10:37.41349533 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000112c60)} Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.475 [INFO][4355] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.602 [INFO][4355] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.602 [INFO][4355] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.609 [INFO][4355] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.667 [INFO][4355] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.690 [INFO][4355] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.692 [INFO][4355] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.699 [INFO][4355] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.699 [INFO][4355] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.706 [INFO][4355] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.712 [INFO][4355] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.731 [INFO][4355] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.40.71/26] block=192.168.40.64/26 handle="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.731 [INFO][4355] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.71/26] handle="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.731 [INFO][4355] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:37.825594 containerd[1641]: 2026-04-21 10:10:37.731 [INFO][4355] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.71/26] IPv6=[] ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" HandleID="k8s-pod-network.f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:37.826105 containerd[1641]: 2026-04-21 10:10:37.748 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"570e5e68-cc52-417a-8289-d4541cfa3d48", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"coredns-674b8bbfcf-z6wr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9aaf3855ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.826105 containerd[1641]: 2026-04-21 10:10:37.748 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.71/32] ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:37.826105 containerd[1641]: 2026-04-21 10:10:37.748 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9aaf3855ae ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:37.826105 containerd[1641]: 2026-04-21 10:10:37.786 [INFO][4224] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:37.826105 containerd[1641]: 2026-04-21 10:10:37.790 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"570e5e68-cc52-417a-8289-d4541cfa3d48", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d", Pod:"coredns-674b8bbfcf-z6wr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9aaf3855ae", 
MAC:"aa:70:52:1f:33:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:37.826105 containerd[1641]: 2026-04-21 10:10:37.818 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d" Namespace="kube-system" Pod="coredns-674b8bbfcf-z6wr9" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:37.865668 containerd[1641]: time="2026-04-21T10:10:37.863246394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sdrhd,Uid:1428411e-a64c-4ee4-8cf2-81f0993ef792,Namespace:kube-system,Attempt:1,} returns sandbox id \"5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a\"" Apr 21 10:10:37.905542 containerd[1641]: time="2026-04-21T10:10:37.905145916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:37.905542 containerd[1641]: time="2026-04-21T10:10:37.905245978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:37.905542 containerd[1641]: time="2026-04-21T10:10:37.905256814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.905542 containerd[1641]: time="2026-04-21T10:10:37.905329924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:37.908730 containerd[1641]: time="2026-04-21T10:10:37.907981475Z" level=info msg="CreateContainer within sandbox \"5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:10:37.948595 containerd[1641]: time="2026-04-21T10:10:37.948353644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f669854f8-7l8ws,Uid:62a145a7-f01c-43dc-b01f-ede287e53eec,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019\"" Apr 21 10:10:37.952664 containerd[1641]: time="2026-04-21T10:10:37.952633059Z" level=info msg="CreateContainer within sandbox \"5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb0fa4a2244492188289f1eeef82d8548bdd994c929e5f2cb7e7ffc6bfbd6618\"" Apr 21 10:10:37.954968 containerd[1641]: time="2026-04-21T10:10:37.953458899Z" level=info msg="StartContainer for \"cb0fa4a2244492188289f1eeef82d8548bdd994c929e5f2cb7e7ffc6bfbd6618\"" Apr 21 10:10:38.019850 containerd[1641]: time="2026-04-21T10:10:38.018660598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z6wr9,Uid:570e5e68-cc52-417a-8289-d4541cfa3d48,Namespace:kube-system,Attempt:1,} returns sandbox id \"f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d\"" Apr 21 10:10:38.023330 containerd[1641]: time="2026-04-21T10:10:38.022901966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-pwqjb,Uid:badb72fe-d615-4587-8706-aaaa53adf7ae,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc\"" Apr 21 10:10:38.027425 containerd[1641]: time="2026-04-21T10:10:38.026319708Z" level=info msg="CreateContainer within sandbox \"f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:10:38.063358 containerd[1641]: time="2026-04-21T10:10:38.062923225Z" level=info msg="CreateContainer within sandbox \"f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7db080a3c8707f03949556a11674c4e3a106d75c2fd14f0c7959f3bb579621da\"" Apr 21 10:10:38.063539 containerd[1641]: time="2026-04-21T10:10:38.063486170Z" level=info msg="StartContainer for \"7db080a3c8707f03949556a11674c4e3a106d75c2fd14f0c7959f3bb579621da\"" Apr 21 10:10:38.097517 containerd[1641]: time="2026-04-21T10:10:38.097484726Z" level=info msg="StartContainer for \"cb0fa4a2244492188289f1eeef82d8548bdd994c929e5f2cb7e7ffc6bfbd6618\" returns successfully" Apr 21 10:10:38.168684 systemd-networkd[1269]: cali4672cbcc96c: Link UP Apr 21 10:10:38.169980 systemd-networkd[1269]: cali4672cbcc96c: Gained carrier Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.057 [INFO][4586] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0 whisker-6bcb98597d- calico-system a8f3a6ac-dfe5-4133-92b4-9cb9343849a7 921 0 2026-04-21 10:10:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bcb98597d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-e-553501566f whisker-6bcb98597d-gzvk6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4672cbcc96c [] [] }} ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" 
Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.058 [INFO][4586] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.111 [INFO][4687] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" HandleID="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.119 [INFO][4687] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" HandleID="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-e-553501566f", "pod":"whisker-6bcb98597d-gzvk6", "timestamp":"2026-04-21 10:10:38.111349452 +0000 UTC"}, Hostname:"ci-4081-3-7-e-553501566f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00016a6e0)} Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.121 [INFO][4687] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.121 [INFO][4687] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.121 [INFO][4687] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-e-553501566f' Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.124 [INFO][4687] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.129 [INFO][4687] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.135 [INFO][4687] ipam/ipam.go 526: Trying affinity for 192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.138 [INFO][4687] ipam/ipam.go 160: Attempting to load block cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.141 [INFO][4687] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.40.64/26 host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.141 [INFO][4687] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.40.64/26 handle="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.143 [INFO][4687] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270 Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.148 [INFO][4687] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.40.64/26 handle="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" 
host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.155 [INFO][4687] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.40.72/26] block=192.168.40.64/26 handle="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.155 [INFO][4687] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.40.72/26] handle="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" host="ci-4081-3-7-e-553501566f" Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.155 [INFO][4687] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:38.199760 containerd[1641]: 2026-04-21 10:10:38.155 [INFO][4687] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.40.72/26] IPv6=[] ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" HandleID="k8s-pod-network.e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" Apr 21 10:10:38.201116 containerd[1641]: 2026-04-21 10:10:38.159 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0", GenerateName:"whisker-6bcb98597d-", Namespace:"calico-system", SelfLink:"", UID:"a8f3a6ac-dfe5-4133-92b4-9cb9343849a7", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bcb98597d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"", Pod:"whisker-6bcb98597d-gzvk6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4672cbcc96c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:38.201116 containerd[1641]: 2026-04-21 10:10:38.159 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.40.72/32] ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" Apr 21 10:10:38.201116 containerd[1641]: 2026-04-21 10:10:38.159 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4672cbcc96c ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" Apr 21 10:10:38.201116 containerd[1641]: 2026-04-21 10:10:38.169 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" 
WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" Apr 21 10:10:38.201116 containerd[1641]: 2026-04-21 10:10:38.170 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0", GenerateName:"whisker-6bcb98597d-", Namespace:"calico-system", SelfLink:"", UID:"a8f3a6ac-dfe5-4133-92b4-9cb9343849a7", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bcb98597d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270", Pod:"whisker-6bcb98597d-gzvk6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.40.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4672cbcc96c", MAC:"92:88:70:ea:28:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:38.201116 
containerd[1641]: 2026-04-21 10:10:38.195 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270" Namespace="calico-system" Pod="whisker-6bcb98597d-gzvk6" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--6bcb98597d--gzvk6-eth0" Apr 21 10:10:38.220150 containerd[1641]: time="2026-04-21T10:10:38.220050798Z" level=info msg="StartContainer for \"7db080a3c8707f03949556a11674c4e3a106d75c2fd14f0c7959f3bb579621da\" returns successfully" Apr 21 10:10:38.241441 kubelet[2751]: I0421 10:10:38.241413 2751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f481da-cd9d-4039-816a-786b14831a9e" path="/var/lib/kubelet/pods/e8f481da-cd9d-4039-816a-786b14831a9e/volumes" Apr 21 10:10:38.250896 containerd[1641]: time="2026-04-21T10:10:38.250536340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:38.251283 containerd[1641]: time="2026-04-21T10:10:38.251142540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:38.251283 containerd[1641]: time="2026-04-21T10:10:38.251155289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:38.251423 containerd[1641]: time="2026-04-21T10:10:38.251372986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:38.292903 systemd-networkd[1269]: calib74f08265d0: Gained IPv6LL Apr 21 10:10:38.308629 systemd-networkd[1269]: vxlan.calico: Link UP Apr 21 10:10:38.308635 systemd-networkd[1269]: vxlan.calico: Gained carrier Apr 21 10:10:38.368705 containerd[1641]: time="2026-04-21T10:10:38.368677379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bcb98597d-gzvk6,Uid:a8f3a6ac-dfe5-4133-92b4-9cb9343849a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270\"" Apr 21 10:10:38.423485 kubelet[2751]: I0421 10:10:38.423440 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sdrhd" podStartSLOduration=35.423426679 podStartE2EDuration="35.423426679s" podCreationTimestamp="2026-04-21 10:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:38.421016872 +0000 UTC m=+40.269114730" watchObservedRunningTime="2026-04-21 10:10:38.423426679 +0000 UTC m=+40.271524537" Apr 21 10:10:38.485053 systemd-networkd[1269]: cali7fb5ab5fac5: Gained IPv6LL Apr 21 10:10:38.544063 systemd-networkd[1269]: cali355f1e45863: Gained IPv6LL Apr 21 10:10:38.864616 systemd-networkd[1269]: calib9aaf3855ae: Gained IPv6LL Apr 21 10:10:38.929077 systemd-networkd[1269]: calied79d9aa75c: Gained IPv6LL Apr 21 10:10:38.994526 systemd-networkd[1269]: cali6439df04bd9: Gained IPv6LL Apr 21 10:10:39.040411 containerd[1641]: time="2026-04-21T10:10:39.040252368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:39.041066 containerd[1641]: time="2026-04-21T10:10:39.041036835Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:10:39.041877 
containerd[1641]: time="2026-04-21T10:10:39.041854363Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:39.043665 containerd[1641]: time="2026-04-21T10:10:39.043630559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:39.044174 containerd[1641]: time="2026-04-21T10:10:39.044144941Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.260231697s" Apr 21 10:10:39.044218 containerd[1641]: time="2026-04-21T10:10:39.044175417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:10:39.050403 containerd[1641]: time="2026-04-21T10:10:39.049808674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:10:39.053200 containerd[1641]: time="2026-04-21T10:10:39.053174157Z" level=info msg="CreateContainer within sandbox \"19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:10:39.073746 containerd[1641]: time="2026-04-21T10:10:39.073715987Z" level=info msg="CreateContainer within sandbox \"19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6c1e1c4d03aafc884c1ce2c52f4e861d4506eadc84801631fb6ba574346c7169\"" Apr 21 10:10:39.075487 containerd[1641]: 
time="2026-04-21T10:10:39.074478361Z" level=info msg="StartContainer for \"6c1e1c4d03aafc884c1ce2c52f4e861d4506eadc84801631fb6ba574346c7169\"" Apr 21 10:10:39.136618 containerd[1641]: time="2026-04-21T10:10:39.136488709Z" level=info msg="StartContainer for \"6c1e1c4d03aafc884c1ce2c52f4e861d4506eadc84801631fb6ba574346c7169\" returns successfully" Apr 21 10:10:39.505078 systemd-networkd[1269]: cali78c8df874f0: Gained IPv6LL Apr 21 10:10:39.569206 systemd-networkd[1269]: cali4672cbcc96c: Gained IPv6LL Apr 21 10:10:39.952508 systemd-networkd[1269]: vxlan.calico: Gained IPv6LL Apr 21 10:10:42.590261 containerd[1641]: time="2026-04-21T10:10:42.590214897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:42.592027 containerd[1641]: time="2026-04-21T10:10:42.591924834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:10:42.593174 containerd[1641]: time="2026-04-21T10:10:42.593061330Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:42.599348 containerd[1641]: time="2026-04-21T10:10:42.598627345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:42.599348 containerd[1641]: time="2026-04-21T10:10:42.599238222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.549405332s" Apr 
21 10:10:42.599348 containerd[1641]: time="2026-04-21T10:10:42.599263250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:10:42.600298 containerd[1641]: time="2026-04-21T10:10:42.600277813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:10:42.621175 containerd[1641]: time="2026-04-21T10:10:42.620689242Z" level=info msg="CreateContainer within sandbox \"0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:10:42.632208 containerd[1641]: time="2026-04-21T10:10:42.632168173Z" level=info msg="CreateContainer within sandbox \"0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1285a7820109aa7b1507db3cc581f64c9b79929aa299e226da3bf93f72e2d514\"" Apr 21 10:10:42.632894 containerd[1641]: time="2026-04-21T10:10:42.632871279Z" level=info msg="StartContainer for \"1285a7820109aa7b1507db3cc581f64c9b79929aa299e226da3bf93f72e2d514\"" Apr 21 10:10:42.722064 containerd[1641]: time="2026-04-21T10:10:42.722027039Z" level=info msg="StartContainer for \"1285a7820109aa7b1507db3cc581f64c9b79929aa299e226da3bf93f72e2d514\" returns successfully" Apr 21 10:10:42.905001 kubelet[2751]: I0421 10:10:42.904852 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:42.929800 systemd[1]: run-containerd-runc-k8s.io-9b509d46977fa89947568bdd0632c7bbf503951233d710e2d4a0e6481202bb63-runc.nVL2UB.mount: Deactivated successfully. 
Apr 21 10:10:43.019562 kubelet[2751]: I0421 10:10:43.019170 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z6wr9" podStartSLOduration=40.019157348 podStartE2EDuration="40.019157348s" podCreationTimestamp="2026-04-21 10:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:38.45874726 +0000 UTC m=+40.306845108" watchObservedRunningTime="2026-04-21 10:10:43.019157348 +0000 UTC m=+44.867255206" Apr 21 10:10:43.139685 containerd[1641]: time="2026-04-21T10:10:43.139246865Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:43.140745 containerd[1641]: time="2026-04-21T10:10:43.140721539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:10:43.142059 containerd[1641]: time="2026-04-21T10:10:43.142044815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 541.741194ms" Apr 21 10:10:43.142128 containerd[1641]: time="2026-04-21T10:10:43.142118596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:10:43.144642 containerd[1641]: time="2026-04-21T10:10:43.144489195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:10:43.149055 containerd[1641]: time="2026-04-21T10:10:43.148965744Z" level=info msg="CreateContainer within sandbox 
\"de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:10:43.158503 containerd[1641]: time="2026-04-21T10:10:43.158141687Z" level=info msg="CreateContainer within sandbox \"de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"670fc8e69a054e7f4477ac77be6f891bf6143ba8742ebab344ee07335c628748\"" Apr 21 10:10:43.159766 containerd[1641]: time="2026-04-21T10:10:43.159728679Z" level=info msg="StartContainer for \"670fc8e69a054e7f4477ac77be6f891bf6143ba8742ebab344ee07335c628748\"" Apr 21 10:10:43.229927 containerd[1641]: time="2026-04-21T10:10:43.229889672Z" level=info msg="StartContainer for \"670fc8e69a054e7f4477ac77be6f891bf6143ba8742ebab344ee07335c628748\" returns successfully" Apr 21 10:10:43.474801 kubelet[2751]: I0421 10:10:43.473692 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f8dccf684-ft4jt" podStartSLOduration=25.298144761 podStartE2EDuration="30.473676302s" podCreationTimestamp="2026-04-21 10:10:13 +0000 UTC" firstStartedPulling="2026-04-21 10:10:37.424582246 +0000 UTC m=+39.272680104" lastFinishedPulling="2026-04-21 10:10:42.600113797 +0000 UTC m=+44.448211645" observedRunningTime="2026-04-21 10:10:43.473572336 +0000 UTC m=+45.321670184" watchObservedRunningTime="2026-04-21 10:10:43.473676302 +0000 UTC m=+45.321774160" Apr 21 10:10:43.485941 kubelet[2751]: I0421 10:10:43.485878 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7f8dccf684-wzcc5" podStartSLOduration=25.091103227 podStartE2EDuration="30.485863826s" podCreationTimestamp="2026-04-21 10:10:13 +0000 UTC" firstStartedPulling="2026-04-21 10:10:37.747950627 +0000 UTC m=+39.596048485" lastFinishedPulling="2026-04-21 10:10:43.142711236 +0000 UTC m=+44.990809084" observedRunningTime="2026-04-21 
10:10:43.484324715 +0000 UTC m=+45.332422573" watchObservedRunningTime="2026-04-21 10:10:43.485863826 +0000 UTC m=+45.333961694" Apr 21 10:10:44.472648 kubelet[2751]: I0421 10:10:44.472620 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:46.621620 containerd[1641]: time="2026-04-21T10:10:46.621570618Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:46.622576 containerd[1641]: time="2026-04-21T10:10:46.622484609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:10:46.623571 containerd[1641]: time="2026-04-21T10:10:46.623407124Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:46.625525 containerd[1641]: time="2026-04-21T10:10:46.625498613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:46.626065 containerd[1641]: time="2026-04-21T10:10:46.626034277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.481527015s" Apr 21 10:10:46.626096 containerd[1641]: time="2026-04-21T10:10:46.626067246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 
10:10:46.627615 containerd[1641]: time="2026-04-21T10:10:46.627532917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:10:46.653109 containerd[1641]: time="2026-04-21T10:10:46.652907099Z" level=info msg="CreateContainer within sandbox \"9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:10:46.717211 containerd[1641]: time="2026-04-21T10:10:46.717176658Z" level=info msg="CreateContainer within sandbox \"9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7bdbccd60570d95f9021b6bf82d33870375e661ef6ef48d4335ae3d87cc15672\"" Apr 21 10:10:46.718385 containerd[1641]: time="2026-04-21T10:10:46.718365342Z" level=info msg="StartContainer for \"7bdbccd60570d95f9021b6bf82d33870375e661ef6ef48d4335ae3d87cc15672\"" Apr 21 10:10:46.782470 containerd[1641]: time="2026-04-21T10:10:46.782207518Z" level=info msg="StartContainer for \"7bdbccd60570d95f9021b6bf82d33870375e661ef6ef48d4335ae3d87cc15672\" returns successfully" Apr 21 10:10:47.576536 kubelet[2751]: I0421 10:10:47.576213 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f669854f8-7l8ws" podStartSLOduration=25.899219375 podStartE2EDuration="34.576190202s" podCreationTimestamp="2026-04-21 10:10:13 +0000 UTC" firstStartedPulling="2026-04-21 10:10:37.9497562 +0000 UTC m=+39.797854058" lastFinishedPulling="2026-04-21 10:10:46.626727027 +0000 UTC m=+48.474824885" observedRunningTime="2026-04-21 10:10:47.50936357 +0000 UTC m=+49.357461458" watchObservedRunningTime="2026-04-21 10:10:47.576190202 +0000 UTC m=+49.424288090" Apr 21 10:10:47.635282 systemd[1]: run-containerd-runc-k8s.io-7bdbccd60570d95f9021b6bf82d33870375e661ef6ef48d4335ae3d87cc15672-runc.vrup1N.mount: Deactivated successfully. 
Apr 21 10:10:48.803015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4174908327.mount: Deactivated successfully. Apr 21 10:10:49.129269 containerd[1641]: time="2026-04-21T10:10:49.129154866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:49.130411 containerd[1641]: time="2026-04-21T10:10:49.130379975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:10:49.131316 containerd[1641]: time="2026-04-21T10:10:49.131287367Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:49.133392 containerd[1641]: time="2026-04-21T10:10:49.133357904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:49.134026 containerd[1641]: time="2026-04-21T10:10:49.133945276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.506391738s" Apr 21 10:10:49.134026 containerd[1641]: time="2026-04-21T10:10:49.133969242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:10:49.136176 containerd[1641]: time="2026-04-21T10:10:49.136062162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:10:49.138275 containerd[1641]: 
time="2026-04-21T10:10:49.138129445Z" level=info msg="CreateContainer within sandbox \"3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:10:49.149693 containerd[1641]: time="2026-04-21T10:10:49.149660520Z" level=info msg="CreateContainer within sandbox \"3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"646e665e5492dd0740b6f5f34fa34790cfeac9d35aafb253f2ad7439747d70e5\"" Apr 21 10:10:49.150188 containerd[1641]: time="2026-04-21T10:10:49.150101411Z" level=info msg="StartContainer for \"646e665e5492dd0740b6f5f34fa34790cfeac9d35aafb253f2ad7439747d70e5\"" Apr 21 10:10:49.209707 containerd[1641]: time="2026-04-21T10:10:49.209667311Z" level=info msg="StartContainer for \"646e665e5492dd0740b6f5f34fa34790cfeac9d35aafb253f2ad7439747d70e5\" returns successfully" Apr 21 10:10:50.544784 systemd[1]: run-containerd-runc-k8s.io-646e665e5492dd0740b6f5f34fa34790cfeac9d35aafb253f2ad7439747d70e5-runc.enxKj0.mount: Deactivated successfully. 
Apr 21 10:10:51.166442 containerd[1641]: time="2026-04-21T10:10:51.166391981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:51.167585 containerd[1641]: time="2026-04-21T10:10:51.167489588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:10:51.168497 containerd[1641]: time="2026-04-21T10:10:51.168462999Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:51.170364 containerd[1641]: time="2026-04-21T10:10:51.170334507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:51.171110 containerd[1641]: time="2026-04-21T10:10:51.170751503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.034670682s" Apr 21 10:10:51.171110 containerd[1641]: time="2026-04-21T10:10:51.170773947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:10:51.171588 containerd[1641]: time="2026-04-21T10:10:51.171567808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:10:51.173727 containerd[1641]: time="2026-04-21T10:10:51.173687739Z" level=info msg="CreateContainer within sandbox 
\"e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:10:51.195509 containerd[1641]: time="2026-04-21T10:10:51.195476135Z" level=info msg="CreateContainer within sandbox \"e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6bc659fc6c7ac67f876c4fc8c9c95a8392b754bf8ece972c39c4f1739f44de33\"" Apr 21 10:10:51.196041 containerd[1641]: time="2026-04-21T10:10:51.196016847Z" level=info msg="StartContainer for \"6bc659fc6c7ac67f876c4fc8c9c95a8392b754bf8ece972c39c4f1739f44de33\"" Apr 21 10:10:51.264651 containerd[1641]: time="2026-04-21T10:10:51.264618491Z" level=info msg="StartContainer for \"6bc659fc6c7ac67f876c4fc8c9c95a8392b754bf8ece972c39c4f1739f44de33\" returns successfully" Apr 21 10:10:53.484152 containerd[1641]: time="2026-04-21T10:10:53.484107307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:53.485250 containerd[1641]: time="2026-04-21T10:10:53.485178395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:10:53.485985 containerd[1641]: time="2026-04-21T10:10:53.485953337Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:53.487594 containerd[1641]: time="2026-04-21T10:10:53.487564064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:53.488074 containerd[1641]: time="2026-04-21T10:10:53.488050554Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.316457909s" Apr 21 10:10:53.488107 containerd[1641]: time="2026-04-21T10:10:53.488077575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:10:53.489192 containerd[1641]: time="2026-04-21T10:10:53.488998497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:10:53.491870 containerd[1641]: time="2026-04-21T10:10:53.491847321Z" level=info msg="CreateContainer within sandbox \"19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:10:53.507103 containerd[1641]: time="2026-04-21T10:10:53.507036102Z" level=info msg="CreateContainer within sandbox \"19b52ec0ff4ab483858032ea3d8961557d8f19b7f4b31dd40ca5cf0a81f9de9c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3c1f284181490c6b040b0469a0cbac512c3d5155b5d43b1cd873c37e77604be5\"" Apr 21 10:10:53.508100 containerd[1641]: time="2026-04-21T10:10:53.507956334Z" level=info msg="StartContainer for \"3c1f284181490c6b040b0469a0cbac512c3d5155b5d43b1cd873c37e77604be5\"" Apr 21 10:10:53.562115 containerd[1641]: time="2026-04-21T10:10:53.562086578Z" level=info msg="StartContainer for \"3c1f284181490c6b040b0469a0cbac512c3d5155b5d43b1cd873c37e77604be5\" returns successfully" Apr 21 10:10:54.322490 kubelet[2751]: I0421 10:10:54.322378 2751 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:10:54.324616 kubelet[2751]: I0421 10:10:54.324562 2751 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:10:54.534892 kubelet[2751]: I0421 10:10:54.532620 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-pwqjb" podStartSLOduration=30.423037777 podStartE2EDuration="41.53259855s" podCreationTimestamp="2026-04-21 10:10:13 +0000 UTC" firstStartedPulling="2026-04-21 10:10:38.02509574 +0000 UTC m=+39.873193588" lastFinishedPulling="2026-04-21 10:10:49.134656513 +0000 UTC m=+50.982754361" observedRunningTime="2026-04-21 10:10:49.522997915 +0000 UTC m=+51.371095804" watchObservedRunningTime="2026-04-21 10:10:54.53259855 +0000 UTC m=+56.380696478" Apr 21 10:10:54.534892 kubelet[2751]: I0421 10:10:54.533718 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7trjk" podStartSLOduration=24.827830894 podStartE2EDuration="41.533705642s" podCreationTimestamp="2026-04-21 10:10:13 +0000 UTC" firstStartedPulling="2026-04-21 10:10:36.782948015 +0000 UTC m=+38.631045873" lastFinishedPulling="2026-04-21 10:10:53.488822763 +0000 UTC m=+55.336920621" observedRunningTime="2026-04-21 10:10:54.532556998 +0000 UTC m=+56.380654856" watchObservedRunningTime="2026-04-21 10:10:54.533705642 +0000 UTC m=+56.381803530" Apr 21 10:10:56.229589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount853551459.mount: Deactivated successfully. 
Apr 21 10:10:56.244538 containerd[1641]: time="2026-04-21T10:10:56.244495501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:56.245547 containerd[1641]: time="2026-04-21T10:10:56.245442782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:10:56.246304 containerd[1641]: time="2026-04-21T10:10:56.246273388Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:56.248130 containerd[1641]: time="2026-04-21T10:10:56.248108862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:56.248680 containerd[1641]: time="2026-04-21T10:10:56.248609573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.759592258s" Apr 21 10:10:56.248680 containerd[1641]: time="2026-04-21T10:10:56.248630915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:10:56.251944 containerd[1641]: time="2026-04-21T10:10:56.251921493Z" level=info msg="CreateContainer within sandbox \"e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:10:56.266381 
containerd[1641]: time="2026-04-21T10:10:56.266335119Z" level=info msg="CreateContainer within sandbox \"e832d640e27fbd81eebe3c5016f1e06551d845754609ee412b076d221db00270\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c7bb98c42410dadf97d2e196ec57dceca3645fe24b1c66ef4423031c5ff19e34\"" Apr 21 10:10:56.267693 containerd[1641]: time="2026-04-21T10:10:56.267665326Z" level=info msg="StartContainer for \"c7bb98c42410dadf97d2e196ec57dceca3645fe24b1c66ef4423031c5ff19e34\"" Apr 21 10:10:56.335045 containerd[1641]: time="2026-04-21T10:10:56.334994970Z" level=info msg="StartContainer for \"c7bb98c42410dadf97d2e196ec57dceca3645fe24b1c66ef4423031c5ff19e34\" returns successfully" Apr 21 10:10:56.546266 kubelet[2751]: I0421 10:10:56.545915 2751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bcb98597d-gzvk6" podStartSLOduration=1.667923873 podStartE2EDuration="19.54589302s" podCreationTimestamp="2026-04-21 10:10:37 +0000 UTC" firstStartedPulling="2026-04-21 10:10:38.37136218 +0000 UTC m=+40.219460028" lastFinishedPulling="2026-04-21 10:10:56.249331317 +0000 UTC m=+58.097429175" observedRunningTime="2026-04-21 10:10:56.545300241 +0000 UTC m=+58.393398129" watchObservedRunningTime="2026-04-21 10:10:56.54589302 +0000 UTC m=+58.393990908" Apr 21 10:10:58.227553 containerd[1641]: time="2026-04-21T10:10:58.227452515Z" level=info msg="StopPodSandbox for \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\"" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.295 [WARNING][5412] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"570e5e68-cc52-417a-8289-d4541cfa3d48", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d", Pod:"coredns-674b8bbfcf-z6wr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9aaf3855ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 
10:10:58.295 [INFO][5412] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.295 [INFO][5412] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" iface="eth0" netns="" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.295 [INFO][5412] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.295 [INFO][5412] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.322 [INFO][5422] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.322 [INFO][5422] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.322 [INFO][5422] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.327 [WARNING][5422] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.327 [INFO][5422] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.328 [INFO][5422] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.333417 containerd[1641]: 2026-04-21 10:10:58.330 [INFO][5412] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.333417 containerd[1641]: time="2026-04-21T10:10:58.332940318Z" level=info msg="TearDown network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\" successfully" Apr 21 10:10:58.333417 containerd[1641]: time="2026-04-21T10:10:58.332962481Z" level=info msg="StopPodSandbox for \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\" returns successfully" Apr 21 10:10:58.334276 containerd[1641]: time="2026-04-21T10:10:58.333466727Z" level=info msg="RemovePodSandbox for \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\"" Apr 21 10:10:58.334276 containerd[1641]: time="2026-04-21T10:10:58.333491855Z" level=info msg="Forcibly stopping sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\"" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.360 [WARNING][5437] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"570e5e68-cc52-417a-8289-d4541cfa3d48", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"f8646f4024f7aa786fd1d9cb561a7616139008d5e63a156f4af2469c1da3841d", Pod:"coredns-674b8bbfcf-z6wr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib9aaf3855ae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 
10:10:58.360 [INFO][5437] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.360 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" iface="eth0" netns="" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.360 [INFO][5437] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.360 [INFO][5437] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.375 [INFO][5444] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.375 [INFO][5444] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.376 [INFO][5444] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.380 [WARNING][5444] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.380 [INFO][5444] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" HandleID="k8s-pod-network.ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--z6wr9-eth0" Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.381 [INFO][5444] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.384774 containerd[1641]: 2026-04-21 10:10:58.383 [INFO][5437] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764" Apr 21 10:10:58.385204 containerd[1641]: time="2026-04-21T10:10:58.384809662Z" level=info msg="TearDown network for sandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\" successfully" Apr 21 10:10:58.393110 containerd[1641]: time="2026-04-21T10:10:58.393081903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:10:58.393205 containerd[1641]: time="2026-04-21T10:10:58.393131978Z" level=info msg="RemovePodSandbox \"ac1126ee029191ad112d8129536516d5b9cd54907cf302f52165f769f2560764\" returns successfully" Apr 21 10:10:58.393559 containerd[1641]: time="2026-04-21T10:10:58.393545869Z" level=info msg="StopPodSandbox for \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\"" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.418 [WARNING][5458] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1428411e-a64c-4ee4-8cf2-81f0993ef792", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a", Pod:"coredns-674b8bbfcf-sdrhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali355f1e45863", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.418 [INFO][5458] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.418 [INFO][5458] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" iface="eth0" netns="" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.418 [INFO][5458] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.418 [INFO][5458] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.433 [INFO][5466] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.433 [INFO][5466] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.433 [INFO][5466] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.439 [WARNING][5466] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.439 [INFO][5466] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.440 [INFO][5466] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.443808 containerd[1641]: 2026-04-21 10:10:58.442 [INFO][5458] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.444263 containerd[1641]: time="2026-04-21T10:10:58.443876695Z" level=info msg="TearDown network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\" successfully" Apr 21 10:10:58.444263 containerd[1641]: time="2026-04-21T10:10:58.443898047Z" level=info msg="StopPodSandbox for \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\" returns successfully" Apr 21 10:10:58.444410 containerd[1641]: time="2026-04-21T10:10:58.444388112Z" level=info msg="RemovePodSandbox for \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\"" Apr 21 10:10:58.444453 containerd[1641]: time="2026-04-21T10:10:58.444412459Z" level=info msg="Forcibly stopping sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\"" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.471 [WARNING][5480] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1428411e-a64c-4ee4-8cf2-81f0993ef792", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"5cf5878bb9ddccd90b53a02b4798063e517541a88d9394ba60f51b4f815a0b5a", Pod:"coredns-674b8bbfcf-sdrhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali355f1e45863", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 
10:10:58.471 [INFO][5480] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.471 [INFO][5480] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" iface="eth0" netns="" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.471 [INFO][5480] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.471 [INFO][5480] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.487 [INFO][5487] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.488 [INFO][5487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.488 [INFO][5487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.492 [WARNING][5487] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.492 [INFO][5487] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" HandleID="k8s-pod-network.73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Workload="ci--4081--3--7--e--553501566f-k8s-coredns--674b8bbfcf--sdrhd-eth0" Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.493 [INFO][5487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.496915 containerd[1641]: 2026-04-21 10:10:58.494 [INFO][5480] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03" Apr 21 10:10:58.496915 containerd[1641]: time="2026-04-21T10:10:58.496653822Z" level=info msg="TearDown network for sandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\" successfully" Apr 21 10:10:58.500784 containerd[1641]: time="2026-04-21T10:10:58.500669927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:10:58.500784 containerd[1641]: time="2026-04-21T10:10:58.500712901Z" level=info msg="RemovePodSandbox \"73cc540a599465dd1918a686b449631253de0f5d0926d791e7ceed7b17bcfd03\" returns successfully" Apr 21 10:10:58.501282 containerd[1641]: time="2026-04-21T10:10:58.501268204Z" level=info msg="StopPodSandbox for \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\"" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.524 [WARNING][5501] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.524 [INFO][5501] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.524 [INFO][5501] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" iface="eth0" netns="" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.525 [INFO][5501] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.525 [INFO][5501] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.541 [INFO][5509] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.542 [INFO][5509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.542 [INFO][5509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.548 [WARNING][5509] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.548 [INFO][5509] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.549 [INFO][5509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.555486 containerd[1641]: 2026-04-21 10:10:58.552 [INFO][5501] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.555486 containerd[1641]: time="2026-04-21T10:10:58.555039162Z" level=info msg="TearDown network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\" successfully" Apr 21 10:10:58.555486 containerd[1641]: time="2026-04-21T10:10:58.555066022Z" level=info msg="StopPodSandbox for \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\" returns successfully" Apr 21 10:10:58.555907 containerd[1641]: time="2026-04-21T10:10:58.555595477Z" level=info msg="RemovePodSandbox for \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\"" Apr 21 10:10:58.555907 containerd[1641]: time="2026-04-21T10:10:58.555613915Z" level=info msg="Forcibly stopping sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\"" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.593 [WARNING][5523] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" WorkloadEndpoint="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.593 [INFO][5523] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.593 [INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" iface="eth0" netns="" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.593 [INFO][5523] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.593 [INFO][5523] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.609 [INFO][5530] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.609 [INFO][5530] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.609 [INFO][5530] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.614 [WARNING][5530] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.614 [INFO][5530] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" HandleID="k8s-pod-network.90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Workload="ci--4081--3--7--e--553501566f-k8s-whisker--554757894f--j76q7-eth0" Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.616 [INFO][5530] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.619349 containerd[1641]: 2026-04-21 10:10:58.617 [INFO][5523] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a" Apr 21 10:10:58.619654 containerd[1641]: time="2026-04-21T10:10:58.619383653Z" level=info msg="TearDown network for sandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\" successfully" Apr 21 10:10:58.624965 containerd[1641]: time="2026-04-21T10:10:58.624937968Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:10:58.625027 containerd[1641]: time="2026-04-21T10:10:58.624985509Z" level=info msg="RemovePodSandbox \"90cd8ac50461ed7f17fd8a4f2e04febf2b5d678ca07be1a90d2f44d438cd744a\" returns successfully" Apr 21 10:10:58.625609 containerd[1641]: time="2026-04-21T10:10:58.625388233Z" level=info msg="StopPodSandbox for \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\"" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.652 [WARNING][5545] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"062b1fa6-3260-4a3c-bfe5-a369e62e02c2", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820", Pod:"calico-apiserver-7f8dccf684-wzcc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6439df04bd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.652 [INFO][5545] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.652 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" iface="eth0" netns="" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.652 [INFO][5545] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.652 [INFO][5545] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.666 [INFO][5552] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.666 [INFO][5552] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.666 [INFO][5552] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.671 [WARNING][5552] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.671 [INFO][5552] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.672 [INFO][5552] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.675482 containerd[1641]: 2026-04-21 10:10:58.673 [INFO][5545] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.675804 containerd[1641]: time="2026-04-21T10:10:58.675515042Z" level=info msg="TearDown network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\" successfully" Apr 21 10:10:58.675804 containerd[1641]: time="2026-04-21T10:10:58.675535723Z" level=info msg="StopPodSandbox for \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\" returns successfully" Apr 21 10:10:58.676198 containerd[1641]: time="2026-04-21T10:10:58.675967621Z" level=info msg="RemovePodSandbox for \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\"" Apr 21 10:10:58.676198 containerd[1641]: time="2026-04-21T10:10:58.675989825Z" level=info msg="Forcibly stopping sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\"" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.713 [WARNING][5566] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"062b1fa6-3260-4a3c-bfe5-a369e62e02c2", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"de46459a80f0f145613ea6298dcade2a639479f076632814645b58a72b66b820", Pod:"calico-apiserver-7f8dccf684-wzcc5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali6439df04bd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.713 [INFO][5566] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.713 [INFO][5566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" iface="eth0" netns="" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.713 [INFO][5566] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.713 [INFO][5566] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.731 [INFO][5584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.731 [INFO][5584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.731 [INFO][5584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.735 [WARNING][5584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.735 [INFO][5584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" HandleID="k8s-pod-network.c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--wzcc5-eth0" Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.737 [INFO][5584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.741028 containerd[1641]: 2026-04-21 10:10:58.739 [INFO][5566] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87" Apr 21 10:10:58.741386 containerd[1641]: time="2026-04-21T10:10:58.741111713Z" level=info msg="TearDown network for sandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\" successfully" Apr 21 10:10:58.745095 containerd[1641]: time="2026-04-21T10:10:58.745067428Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:10:58.745146 containerd[1641]: time="2026-04-21T10:10:58.745126727Z" level=info msg="RemovePodSandbox \"c538452b4fc87e79b528f6c7a90bd0a12d76862d7d02e83f23666e8a188d9a87\" returns successfully" Apr 21 10:10:58.745583 containerd[1641]: time="2026-04-21T10:10:58.745511525Z" level=info msg="StopPodSandbox for \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\"" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.773 [WARNING][5599] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"badb72fe-d615-4587-8706-aaaa53adf7ae", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc", Pod:"goldmane-5b85766d88-pwqjb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calied79d9aa75c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.773 [INFO][5599] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.773 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" iface="eth0" netns="" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.773 [INFO][5599] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.773 [INFO][5599] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.788 [INFO][5606] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.789 [INFO][5606] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.789 [INFO][5606] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.794 [WARNING][5606] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.794 [INFO][5606] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.795 [INFO][5606] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.799269 containerd[1641]: 2026-04-21 10:10:58.797 [INFO][5599] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.799269 containerd[1641]: time="2026-04-21T10:10:58.799080619Z" level=info msg="TearDown network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\" successfully" Apr 21 10:10:58.799269 containerd[1641]: time="2026-04-21T10:10:58.799103052Z" level=info msg="StopPodSandbox for \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\" returns successfully" Apr 21 10:10:58.801827 containerd[1641]: time="2026-04-21T10:10:58.801298506Z" level=info msg="RemovePodSandbox for \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\"" Apr 21 10:10:58.801827 containerd[1641]: time="2026-04-21T10:10:58.801319358Z" level=info msg="Forcibly stopping sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\"" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.825 [WARNING][5620] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"badb72fe-d615-4587-8706-aaaa53adf7ae", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"3ff61993a3c088e216da776521639b50a1d4c6d3b44612f6ea90b44e1df323dc", Pod:"goldmane-5b85766d88-pwqjb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.40.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calied79d9aa75c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.825 [INFO][5620] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.825 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" iface="eth0" netns="" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.825 [INFO][5620] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.825 [INFO][5620] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.841 [INFO][5627] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.842 [INFO][5627] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.842 [INFO][5627] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.847 [WARNING][5627] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.847 [INFO][5627] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" HandleID="k8s-pod-network.99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Workload="ci--4081--3--7--e--553501566f-k8s-goldmane--5b85766d88--pwqjb-eth0" Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.848 [INFO][5627] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.852863 containerd[1641]: 2026-04-21 10:10:58.850 [INFO][5620] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06" Apr 21 10:10:58.853208 containerd[1641]: time="2026-04-21T10:10:58.852925777Z" level=info msg="TearDown network for sandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\" successfully" Apr 21 10:10:58.856896 containerd[1641]: time="2026-04-21T10:10:58.856868924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:10:58.857015 containerd[1641]: time="2026-04-21T10:10:58.856943175Z" level=info msg="RemovePodSandbox \"99f5d550b89fd19de2f8f730031c4ddd5cbe9a703a477f843a96834e51fcad06\" returns successfully" Apr 21 10:10:58.857571 containerd[1641]: time="2026-04-21T10:10:58.857372820Z" level=info msg="StopPodSandbox for \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\"" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.886 [WARNING][5642] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"a97df2fa-3c38-4d58-b70b-3760e19c0ab6", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131", Pod:"calico-apiserver-7f8dccf684-ft4jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib74f08265d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.886 [INFO][5642] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.886 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" iface="eth0" netns="" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.886 [INFO][5642] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.886 [INFO][5642] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.901 [INFO][5649] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.901 [INFO][5649] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.901 [INFO][5649] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.906 [WARNING][5649] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.906 [INFO][5649] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.907 [INFO][5649] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.911632 containerd[1641]: 2026-04-21 10:10:58.909 [INFO][5642] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.912717 containerd[1641]: time="2026-04-21T10:10:58.911660582Z" level=info msg="TearDown network for sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\" successfully" Apr 21 10:10:58.912717 containerd[1641]: time="2026-04-21T10:10:58.911680813Z" level=info msg="StopPodSandbox for \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\" returns successfully" Apr 21 10:10:58.912717 containerd[1641]: time="2026-04-21T10:10:58.912229476Z" level=info msg="RemovePodSandbox for \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\"" Apr 21 10:10:58.912717 containerd[1641]: time="2026-04-21T10:10:58.912250567Z" level=info msg="Forcibly stopping sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\"" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.938 [WARNING][5664] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0", GenerateName:"calico-apiserver-7f8dccf684-", Namespace:"calico-system", SelfLink:"", UID:"a97df2fa-3c38-4d58-b70b-3760e19c0ab6", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f8dccf684", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"0c61961c7fca323c95df546d85f2529d0035059e0dc1407fef9c4000646bf131", Pod:"calico-apiserver-7f8dccf684-ft4jt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib74f08265d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.938 [INFO][5664] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.938 [INFO][5664] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" iface="eth0" netns="" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.938 [INFO][5664] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.938 [INFO][5664] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.958 [INFO][5671] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.958 [INFO][5671] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.958 [INFO][5671] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.963 [WARNING][5671] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.963 [INFO][5671] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" HandleID="k8s-pod-network.e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Workload="ci--4081--3--7--e--553501566f-k8s-calico--apiserver--7f8dccf684--ft4jt-eth0" Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.964 [INFO][5671] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:58.968102 containerd[1641]: 2026-04-21 10:10:58.966 [INFO][5664] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46" Apr 21 10:10:58.968472 containerd[1641]: time="2026-04-21T10:10:58.968134555Z" level=info msg="TearDown network for sandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\" successfully" Apr 21 10:10:58.971887 containerd[1641]: time="2026-04-21T10:10:58.971817051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:10:58.971943 containerd[1641]: time="2026-04-21T10:10:58.971908218Z" level=info msg="RemovePodSandbox \"e4da45900b548c495ed0a3a9471f5f2a51a9ef96570fc6dfb475691dfd041d46\" returns successfully" Apr 21 10:10:58.972445 containerd[1641]: time="2026-04-21T10:10:58.972411663Z" level=info msg="StopPodSandbox for \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\"" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:58.996 [WARNING][5686] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0", GenerateName:"calico-kube-controllers-7f669854f8-", Namespace:"calico-system", SelfLink:"", UID:"62a145a7-f01c-43dc-b01f-ede287e53eec", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f669854f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019", Pod:"calico-kube-controllers-7f669854f8-7l8ws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c8df874f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:58.996 [INFO][5686] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:58.996 [INFO][5686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" iface="eth0" netns="" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:58.996 [INFO][5686] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:58.996 [INFO][5686] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:59.012 [INFO][5693] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:59.012 [INFO][5693] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:59.012 [INFO][5693] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:59.016 [WARNING][5693] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:59.017 [INFO][5693] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:59.018 [INFO][5693] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:59.021653 containerd[1641]: 2026-04-21 10:10:59.019 [INFO][5686] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.022067 containerd[1641]: time="2026-04-21T10:10:59.021685573Z" level=info msg="TearDown network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\" successfully" Apr 21 10:10:59.022067 containerd[1641]: time="2026-04-21T10:10:59.021706464Z" level=info msg="StopPodSandbox for \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\" returns successfully" Apr 21 10:10:59.022098 containerd[1641]: time="2026-04-21T10:10:59.022076479Z" level=info msg="RemovePodSandbox for \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\"" Apr 21 10:10:59.022098 containerd[1641]: time="2026-04-21T10:10:59.022093855Z" level=info msg="Forcibly stopping sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\"" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.047 [WARNING][5707] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0", GenerateName:"calico-kube-controllers-7f669854f8-", Namespace:"calico-system", SelfLink:"", UID:"62a145a7-f01c-43dc-b01f-ede287e53eec", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f669854f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-e-553501566f", ContainerID:"9b849d2183c8f63c323a1d57b91086d116e5bf200e4f682fe3c402b932401019", Pod:"calico-kube-controllers-7f669854f8-7l8ws", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali78c8df874f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.047 [INFO][5707] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.047 [INFO][5707] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" iface="eth0" netns="" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.047 [INFO][5707] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.047 [INFO][5707] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.062 [INFO][5715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.062 [INFO][5715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.063 [INFO][5715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.067 [WARNING][5715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.067 [INFO][5715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" HandleID="k8s-pod-network.6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Workload="ci--4081--3--7--e--553501566f-k8s-calico--kube--controllers--7f669854f8--7l8ws-eth0" Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.068 [INFO][5715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:59.072380 containerd[1641]: 2026-04-21 10:10:59.070 [INFO][5707] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf" Apr 21 10:10:59.072380 containerd[1641]: time="2026-04-21T10:10:59.072299061Z" level=info msg="TearDown network for sandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\" successfully" Apr 21 10:10:59.076639 containerd[1641]: time="2026-04-21T10:10:59.076614817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:10:59.076696 containerd[1641]: time="2026-04-21T10:10:59.076661006Z" level=info msg="RemovePodSandbox \"6521471b59e82d52ab569da094fdf0002bba7419b0cc2a6307eb77866d0a7ddf\" returns successfully" Apr 21 10:11:06.724493 kubelet[2751]: I0421 10:11:06.724203 2751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:11:13.131144 systemd[1]: Started sshd@7-37.27.217.213:22-50.85.169.122:48632.service - OpenSSH per-connection server daemon (50.85.169.122:48632). Apr 21 10:11:13.369605 sshd[5758]: Accepted publickey for core from 50.85.169.122 port 48632 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:13.373561 sshd[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:13.384980 systemd-logind[1622]: New session 8 of user core. Apr 21 10:11:13.392385 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:11:13.663853 sshd[5758]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:13.669553 systemd[1]: sshd@7-37.27.217.213:22-50.85.169.122:48632.service: Deactivated successfully. Apr 21 10:11:13.672880 systemd-logind[1622]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:11:13.673285 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:11:13.675574 systemd-logind[1622]: Removed session 8. Apr 21 10:11:18.701478 systemd[1]: Started sshd@8-37.27.217.213:22-50.85.169.122:48640.service - OpenSSH per-connection server daemon (50.85.169.122:48640). Apr 21 10:11:18.908525 sshd[5802]: Accepted publickey for core from 50.85.169.122 port 48640 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:18.909974 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:18.914889 systemd-logind[1622]: New session 9 of user core. Apr 21 10:11:18.920067 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 21 10:11:19.128575 sshd[5802]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:19.136160 systemd[1]: sshd@8-37.27.217.213:22-50.85.169.122:48640.service: Deactivated successfully. Apr 21 10:11:19.140487 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:11:19.141619 systemd-logind[1622]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:11:19.142677 systemd-logind[1622]: Removed session 9. Apr 21 10:11:24.168727 systemd[1]: Started sshd@9-37.27.217.213:22-50.85.169.122:58130.service - OpenSSH per-connection server daemon (50.85.169.122:58130). Apr 21 10:11:24.394570 sshd[5867]: Accepted publickey for core from 50.85.169.122 port 58130 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:24.398475 sshd[5867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:24.408425 systemd-logind[1622]: New session 10 of user core. Apr 21 10:11:24.413347 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:11:24.644153 sshd[5867]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:24.649252 systemd[1]: sshd@9-37.27.217.213:22-50.85.169.122:58130.service: Deactivated successfully. Apr 21 10:11:24.653759 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:11:24.654708 systemd-logind[1622]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:11:24.655731 systemd-logind[1622]: Removed session 10. Apr 21 10:11:29.686242 systemd[1]: Started sshd@10-37.27.217.213:22-50.85.169.122:54018.service - OpenSSH per-connection server daemon (50.85.169.122:54018). Apr 21 10:11:29.906682 sshd[5898]: Accepted publickey for core from 50.85.169.122 port 54018 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:29.909555 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:29.917980 systemd-logind[1622]: New session 11 of user core. 
Apr 21 10:11:29.923421 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:11:30.186952 sshd[5898]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:30.192503 systemd-logind[1622]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:11:30.194137 systemd[1]: sshd@10-37.27.217.213:22-50.85.169.122:54018.service: Deactivated successfully. Apr 21 10:11:30.199066 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:11:30.200504 systemd-logind[1622]: Removed session 11. Apr 21 10:11:30.225502 systemd[1]: Started sshd@11-37.27.217.213:22-50.85.169.122:54034.service - OpenSSH per-connection server daemon (50.85.169.122:54034). Apr 21 10:11:30.444968 sshd[5913]: Accepted publickey for core from 50.85.169.122 port 54034 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:30.448185 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:30.456938 systemd-logind[1622]: New session 12 of user core. Apr 21 10:11:30.461331 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:11:30.743339 sshd[5913]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:30.746201 systemd[1]: sshd@11-37.27.217.213:22-50.85.169.122:54034.service: Deactivated successfully. Apr 21 10:11:30.749643 systemd-logind[1622]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:11:30.750193 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:11:30.752044 systemd-logind[1622]: Removed session 12. Apr 21 10:11:30.777119 systemd[1]: Started sshd@12-37.27.217.213:22-50.85.169.122:54046.service - OpenSSH per-connection server daemon (50.85.169.122:54046). 
Apr 21 10:11:30.976658 sshd[5926]: Accepted publickey for core from 50.85.169.122 port 54046 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:30.979544 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:30.989824 systemd-logind[1622]: New session 13 of user core. Apr 21 10:11:30.994646 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:11:31.243096 sshd[5926]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:31.247127 systemd[1]: sshd@12-37.27.217.213:22-50.85.169.122:54046.service: Deactivated successfully. Apr 21 10:11:31.251364 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:11:31.252294 systemd-logind[1622]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:11:31.254872 systemd-logind[1622]: Removed session 13. Apr 21 10:11:36.281232 systemd[1]: Started sshd@13-37.27.217.213:22-50.85.169.122:54050.service - OpenSSH per-connection server daemon (50.85.169.122:54050). Apr 21 10:11:36.504295 sshd[5962]: Accepted publickey for core from 50.85.169.122 port 54050 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:36.505708 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:36.510169 systemd-logind[1622]: New session 14 of user core. Apr 21 10:11:36.514512 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:11:36.731731 sshd[5962]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:36.735868 systemd[1]: sshd@13-37.27.217.213:22-50.85.169.122:54050.service: Deactivated successfully. Apr 21 10:11:36.743011 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:11:36.744679 systemd-logind[1622]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:11:36.746722 systemd-logind[1622]: Removed session 14. 
Apr 21 10:11:41.769256 systemd[1]: Started sshd@14-37.27.217.213:22-50.85.169.122:59082.service - OpenSSH per-connection server daemon (50.85.169.122:59082). Apr 21 10:11:41.971241 sshd[5976]: Accepted publickey for core from 50.85.169.122 port 59082 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:41.974476 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:41.984182 systemd-logind[1622]: New session 15 of user core. Apr 21 10:11:41.991313 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:11:42.246556 sshd[5976]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:42.253927 systemd[1]: sshd@14-37.27.217.213:22-50.85.169.122:59082.service: Deactivated successfully. Apr 21 10:11:42.260246 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:11:42.260418 systemd-logind[1622]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:11:42.262779 systemd-logind[1622]: Removed session 15. Apr 21 10:11:42.283534 systemd[1]: Started sshd@15-37.27.217.213:22-50.85.169.122:59084.service - OpenSSH per-connection server daemon (50.85.169.122:59084). Apr 21 10:11:42.504920 sshd[5990]: Accepted publickey for core from 50.85.169.122 port 59084 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:42.507613 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:42.513860 systemd-logind[1622]: New session 16 of user core. Apr 21 10:11:42.522113 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:11:42.917681 sshd[5990]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:42.923956 systemd[1]: sshd@15-37.27.217.213:22-50.85.169.122:59084.service: Deactivated successfully. Apr 21 10:11:42.931000 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:11:42.934044 systemd-logind[1622]: Session 16 logged out. Waiting for processes to exit. 
Apr 21 10:11:42.935947 systemd-logind[1622]: Removed session 16. Apr 21 10:11:42.954083 systemd[1]: Started sshd@16-37.27.217.213:22-50.85.169.122:59100.service - OpenSSH per-connection server daemon (50.85.169.122:59100). Apr 21 10:11:43.185501 sshd[6002]: Accepted publickey for core from 50.85.169.122 port 59100 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:43.188765 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:43.199575 systemd-logind[1622]: New session 17 of user core. Apr 21 10:11:43.204442 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:11:43.842962 sshd[6002]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:43.846546 systemd[1]: sshd@16-37.27.217.213:22-50.85.169.122:59100.service: Deactivated successfully. Apr 21 10:11:43.850448 systemd-logind[1622]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:11:43.851426 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:11:43.852578 systemd-logind[1622]: Removed session 17. Apr 21 10:11:43.880056 systemd[1]: Started sshd@17-37.27.217.213:22-50.85.169.122:59114.service - OpenSSH per-connection server daemon (50.85.169.122:59114). Apr 21 10:11:44.079807 sshd[6051]: Accepted publickey for core from 50.85.169.122 port 59114 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:44.080401 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:44.088081 systemd-logind[1622]: New session 18 of user core. Apr 21 10:11:44.093106 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:11:44.388152 sshd[6051]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:44.392853 systemd[1]: sshd@17-37.27.217.213:22-50.85.169.122:59114.service: Deactivated successfully. Apr 21 10:11:44.394281 systemd-logind[1622]: Session 18 logged out. Waiting for processes to exit. 
Apr 21 10:11:44.397467 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:11:44.399107 systemd-logind[1622]: Removed session 18. Apr 21 10:11:44.423021 systemd[1]: Started sshd@18-37.27.217.213:22-50.85.169.122:59128.service - OpenSSH per-connection server daemon (50.85.169.122:59128). Apr 21 10:11:44.627791 sshd[6063]: Accepted publickey for core from 50.85.169.122 port 59128 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:44.630280 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:44.639936 systemd-logind[1622]: New session 19 of user core. Apr 21 10:11:44.647100 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:11:44.868126 sshd[6063]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:44.871544 systemd[1]: sshd@18-37.27.217.213:22-50.85.169.122:59128.service: Deactivated successfully. Apr 21 10:11:44.876176 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:11:44.876916 systemd-logind[1622]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:11:44.879217 systemd-logind[1622]: Removed session 19. Apr 21 10:11:49.908061 systemd[1]: Started sshd@19-37.27.217.213:22-50.85.169.122:40532.service - OpenSSH per-connection server daemon (50.85.169.122:40532). Apr 21 10:11:50.115910 sshd[6097]: Accepted publickey for core from 50.85.169.122 port 40532 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:50.117107 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:50.126451 systemd-logind[1622]: New session 20 of user core. Apr 21 10:11:50.133553 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 10:11:50.359196 sshd[6097]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:50.364098 systemd[1]: sshd@19-37.27.217.213:22-50.85.169.122:40532.service: Deactivated successfully. 
Apr 21 10:11:50.372397 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 10:11:50.374616 systemd-logind[1622]: Session 20 logged out. Waiting for processes to exit. Apr 21 10:11:50.377215 systemd-logind[1622]: Removed session 20. Apr 21 10:11:55.398279 systemd[1]: Started sshd@20-37.27.217.213:22-50.85.169.122:40538.service - OpenSSH per-connection server daemon (50.85.169.122:40538). Apr 21 10:11:55.623065 sshd[6131]: Accepted publickey for core from 50.85.169.122 port 40538 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:55.625945 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:55.635357 systemd-logind[1622]: New session 21 of user core. Apr 21 10:11:55.643677 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 21 10:11:55.868253 sshd[6131]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:55.874467 systemd[1]: sshd@20-37.27.217.213:22-50.85.169.122:40538.service: Deactivated successfully. Apr 21 10:11:55.880966 systemd-logind[1622]: Session 21 logged out. Waiting for processes to exit. Apr 21 10:11:55.881751 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 10:11:55.883248 systemd-logind[1622]: Removed session 21. 
Apr 21 10:12:14.123374 containerd[1641]: time="2026-04-21T10:12:14.123270768Z" level=info msg="shim disconnected" id=ef41b394d4990ec627fc3025e30342d5bed1b64ac96fac273a46f996b44d41cc namespace=k8s.io Apr 21 10:12:14.126033 containerd[1641]: time="2026-04-21T10:12:14.123444618Z" level=warning msg="cleaning up after shim disconnected" id=ef41b394d4990ec627fc3025e30342d5bed1b64ac96fac273a46f996b44d41cc namespace=k8s.io Apr 21 10:12:14.126033 containerd[1641]: time="2026-04-21T10:12:14.123463737Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:14.131144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef41b394d4990ec627fc3025e30342d5bed1b64ac96fac273a46f996b44d41cc-rootfs.mount: Deactivated successfully. Apr 21 10:12:14.163942 kubelet[2751]: E0421 10:12:14.163912 2751 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45432->10.0.0.2:2379: read: connection timed out" Apr 21 10:12:14.670878 containerd[1641]: time="2026-04-21T10:12:14.668227996Z" level=info msg="shim disconnected" id=846256436adbe5cf89a891d024c351d871f2f01449ad45a4ec237ab39a6db214 namespace=k8s.io Apr 21 10:12:14.670878 containerd[1641]: time="2026-04-21T10:12:14.668290680Z" level=warning msg="cleaning up after shim disconnected" id=846256436adbe5cf89a891d024c351d871f2f01449ad45a4ec237ab39a6db214 namespace=k8s.io Apr 21 10:12:14.670878 containerd[1641]: time="2026-04-21T10:12:14.668303850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:14.675997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-846256436adbe5cf89a891d024c351d871f2f01449ad45a4ec237ab39a6db214-rootfs.mount: Deactivated successfully. 
Apr 21 10:12:14.735041 kubelet[2751]: I0421 10:12:14.734975 2751 scope.go:117] "RemoveContainer" containerID="ef41b394d4990ec627fc3025e30342d5bed1b64ac96fac273a46f996b44d41cc" Apr 21 10:12:14.739026 containerd[1641]: time="2026-04-21T10:12:14.738815079Z" level=info msg="CreateContainer within sandbox \"097aa3f78eae2dabc5cad8c6d46051aeb86374a1ef40fe53f4e41068cd6220bd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 21 10:12:14.740048 kubelet[2751]: I0421 10:12:14.740020 2751 scope.go:117] "RemoveContainer" containerID="846256436adbe5cf89a891d024c351d871f2f01449ad45a4ec237ab39a6db214" Apr 21 10:12:14.742707 containerd[1641]: time="2026-04-21T10:12:14.742639877Z" level=info msg="CreateContainer within sandbox \"585657cebccea574a7b0ab5ecfd18a22cc9dc8518c42f03983d6e5a55505b330\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 21 10:12:14.776524 containerd[1641]: time="2026-04-21T10:12:14.773706674Z" level=info msg="CreateContainer within sandbox \"097aa3f78eae2dabc5cad8c6d46051aeb86374a1ef40fe53f4e41068cd6220bd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a2463c15b61b414ff3abcc4375c6cdf3c648acf8bc3a8741efb24013f370352d\"" Apr 21 10:12:14.776524 containerd[1641]: time="2026-04-21T10:12:14.775021660Z" level=info msg="StartContainer for \"a2463c15b61b414ff3abcc4375c6cdf3c648acf8bc3a8741efb24013f370352d\"" Apr 21 10:12:14.778152 containerd[1641]: time="2026-04-21T10:12:14.778135475Z" level=info msg="CreateContainer within sandbox \"585657cebccea574a7b0ab5ecfd18a22cc9dc8518c42f03983d6e5a55505b330\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d63ecb6a256de2ad1c8cd629e9e62c0b669139ecc836cdd6406718f07af75c02\"" Apr 21 10:12:14.779113 containerd[1641]: time="2026-04-21T10:12:14.779087429Z" level=info msg="StartContainer for \"d63ecb6a256de2ad1c8cd629e9e62c0b669139ecc836cdd6406718f07af75c02\"" Apr 21 10:12:14.840298 containerd[1641]: 
time="2026-04-21T10:12:14.840263682Z" level=info msg="StartContainer for \"d63ecb6a256de2ad1c8cd629e9e62c0b669139ecc836cdd6406718f07af75c02\" returns successfully" Apr 21 10:12:14.855530 containerd[1641]: time="2026-04-21T10:12:14.855488664Z" level=info msg="StartContainer for \"a2463c15b61b414ff3abcc4375c6cdf3c648acf8bc3a8741efb24013f370352d\" returns successfully" Apr 21 10:12:18.196380 kubelet[2751]: E0421 10:12:18.194499 2751 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45262->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-e-553501566f.18a8578fb71703cf kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-e-553501566f,UID:4aff57d252410a2aa90fa2327a6c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-e-553501566f,},FirstTimestamp:2026-04-21 10:12:07.761339343 +0000 UTC m=+129.609437231,LastTimestamp:2026-04-21 10:12:07.761339343 +0000 UTC m=+129.609437231,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-e-553501566f,}" Apr 21 10:12:19.215925 containerd[1641]: time="2026-04-21T10:12:19.214354813Z" level=info msg="shim disconnected" id=e0800efc9a89825e779f811076ebd2758543322c103522141ec3bb2675037ee4 namespace=k8s.io Apr 21 10:12:19.215925 containerd[1641]: time="2026-04-21T10:12:19.214414251Z" level=warning msg="cleaning up after shim disconnected" id=e0800efc9a89825e779f811076ebd2758543322c103522141ec3bb2675037ee4 namespace=k8s.io Apr 21 10:12:19.215925 containerd[1641]: time="2026-04-21T10:12:19.214427811Z" level=info msg="cleaning up dead shim" 
namespace=k8s.io Apr 21 10:12:19.227478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0800efc9a89825e779f811076ebd2758543322c103522141ec3bb2675037ee4-rootfs.mount: Deactivated successfully. Apr 21 10:12:19.759728 kubelet[2751]: I0421 10:12:19.759654 2751 scope.go:117] "RemoveContainer" containerID="e0800efc9a89825e779f811076ebd2758543322c103522141ec3bb2675037ee4" Apr 21 10:12:19.762974 containerd[1641]: time="2026-04-21T10:12:19.762903251Z" level=info msg="CreateContainer within sandbox \"96577a093155c649a94d8da751b7256694583affd1da281c2d2bbd3634c1bae8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 21 10:12:19.781999 containerd[1641]: time="2026-04-21T10:12:19.780726003Z" level=info msg="CreateContainer within sandbox \"96577a093155c649a94d8da751b7256694583affd1da281c2d2bbd3634c1bae8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"4990bafdea0e240c1aad0057f215c541eb21734f55be3e74b3645abe8addda6e\"" Apr 21 10:12:19.781999 containerd[1641]: time="2026-04-21T10:12:19.781388594Z" level=info msg="StartContainer for \"4990bafdea0e240c1aad0057f215c541eb21734f55be3e74b3645abe8addda6e\"" Apr 21 10:12:19.863552 containerd[1641]: time="2026-04-21T10:12:19.863517493Z" level=info msg="StartContainer for \"4990bafdea0e240c1aad0057f215c541eb21734f55be3e74b3645abe8addda6e\" returns successfully"