Apr 21 10:09:25.944483 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:09:25.944500 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:09:25.944509 kernel: BIOS-provided physical RAM map:
Apr 21 10:09:25.944514 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:09:25.944518 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Apr 21 10:09:25.944523 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Apr 21 10:09:25.944528 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Apr 21 10:09:25.944532 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007f9ecfff] reserved
Apr 21 10:09:25.944537 kernel: BIOS-e820: [mem 0x000000007f9ed000-0x000000007faecfff] type 20
Apr 21 10:09:25.944541 kernel: BIOS-e820: [mem 0x000000007faed000-0x000000007fb6cfff] reserved
Apr 21 10:09:25.944546 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Apr 21 10:09:25.944553 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Apr 21 10:09:25.944576 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Apr 21 10:09:25.944581 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Apr 21 10:09:25.944586 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Apr 21 10:09:25.944591 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 10:09:25.944599 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Apr 21 10:09:25.944603 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Apr 21 10:09:25.944608 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 21 10:09:25.944612 kernel: NX (Execute Disable) protection: active
Apr 21 10:09:25.944617 kernel: APIC: Static calls initialized
Apr 21 10:09:25.944622 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 21 10:09:25.944626 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198
Apr 21 10:09:25.944631 kernel: efi: Remove mem135: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Apr 21 10:09:25.944636 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Apr 21 10:09:25.944641 kernel: SMBIOS 3.0.0 present.
Apr 21 10:09:25.944645 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Apr 21 10:09:25.944650 kernel: Hypervisor detected: KVM
Apr 21 10:09:25.944657 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:09:25.944662 kernel: kvm-clock: using sched offset of 12792342292 cycles
Apr 21 10:09:25.944667 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:09:25.944672 kernel: tsc: Detected 2396.398 MHz processor
Apr 21 10:09:25.944677 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:09:25.944682 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:09:25.944687 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Apr 21 10:09:25.944692 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:09:25.944697 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:09:25.944704 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Apr 21 10:09:25.944709 kernel: Using GB pages for direct mapping
Apr 21 10:09:25.944714 kernel: Secure boot disabled
Apr 21 10:09:25.944721 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:09:25.944727 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Apr 21 10:09:25.944732 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Apr 21 10:09:25.944736 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:25.944744 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:25.944749 kernel: ACPI: FACS 0x000000007FBDD000 000040
Apr 21 10:09:25.944754 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:25.944759 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:25.944764 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:25.944769 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 10:09:25.944774 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 21 10:09:25.944781 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Apr 21 10:09:25.944786 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Apr 21 10:09:25.944791 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Apr 21 10:09:25.944796 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Apr 21 10:09:25.944801 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Apr 21 10:09:25.944807 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Apr 21 10:09:25.944812 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Apr 21 10:09:25.944817 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Apr 21 10:09:25.944822 kernel: No NUMA configuration found
Apr 21 10:09:25.944830 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Apr 21 10:09:25.944835 kernel: NODE_DATA(0) allocated [mem 0x179ffa000-0x179ffffff]
Apr 21 10:09:25.944840 kernel: Zone ranges:
Apr 21 10:09:25.944845 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:09:25.944850 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 21 10:09:25.944855 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:09:25.944860 kernel: Movable zone start for each node
Apr 21 10:09:25.944864 kernel: Early memory node ranges
Apr 21 10:09:25.944869 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:09:25.944874 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Apr 21 10:09:25.944882 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Apr 21 10:09:25.944887 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Apr 21 10:09:25.944892 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Apr 21 10:09:25.944897 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Apr 21 10:09:25.944902 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:09:25.944907 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:09:25.944912 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Apr 21 10:09:25.944917 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 21 10:09:25.944922 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Apr 21 10:09:25.944929 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 21 10:09:25.944934 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 10:09:25.944939 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:09:25.944944 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 10:09:25.944949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 10:09:25.944954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:09:25.944959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:09:25.944964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:09:25.944969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:09:25.944976 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:09:25.944981 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:09:25.944986 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:09:25.944991 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:09:25.944996 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Apr 21 10:09:25.945001 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:09:25.945006 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:09:25.945011 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:09:25.945016 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:09:25.945023 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:09:25.945028 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:09:25.945033 kernel: kvm-guest: PV spinlocks disabled, no host support
Apr 21 10:09:25.945039 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:09:25.945044 kernel: random: crng init done
Apr 21 10:09:25.945049 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 10:09:25.945054 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:09:25.945059 kernel: Fallback order for Node 0: 0
Apr 21 10:09:25.945066 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1004632
Apr 21 10:09:25.945071 kernel: Policy zone: Normal
Apr 21 10:09:25.945076 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:09:25.945081 kernel: software IO TLB: area num 2.
Apr 21 10:09:25.945086 kernel: Memory: 3827836K/4091168K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 263128K reserved, 0K cma-reserved)
Apr 21 10:09:25.945091 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:09:25.945096 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:09:25.945101 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:09:25.945106 kernel: Dynamic Preempt: voluntary
Apr 21 10:09:25.945113 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:09:25.945119 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:09:25.945125 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:09:25.945130 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:09:25.945142 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:09:25.945150 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:09:25.945155 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:09:25.945160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:09:25.945165 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:09:25.945170 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:09:25.945176 kernel: Console: colour dummy device 80x25
Apr 21 10:09:25.945181 kernel: printk: console [tty0] enabled
Apr 21 10:09:25.945188 kernel: printk: console [ttyS0] enabled
Apr 21 10:09:25.945194 kernel: ACPI: Core revision 20230628
Apr 21 10:09:25.945206 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 10:09:25.945211 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:09:25.945216 kernel: x2apic enabled
Apr 21 10:09:25.945224 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:09:25.945229 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 10:09:25.945234 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Apr 21 10:09:25.945239 kernel: Calibrating delay loop (skipped) preset value.. 4792.79 BogoMIPS (lpj=2396398)
Apr 21 10:09:25.945245 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 10:09:25.945250 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 21 10:09:25.945255 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 21 10:09:25.945260 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:09:25.945266 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Apr 21 10:09:25.945273 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 21 10:09:25.945278 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 21 10:09:25.945288 kernel: active return thunk: srso_alias_return_thunk
Apr 21 10:09:25.945293 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Apr 21 10:09:25.945298 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 21 10:09:25.945303 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:09:25.945309 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:09:25.945314 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:09:25.945319 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:09:25.945327 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:09:25.945332 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:09:25.945337 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:09:25.945343 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:09:25.945348 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:09:25.945353 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 10:09:25.945358 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 10:09:25.945363 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 10:09:25.945369 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Apr 21 10:09:25.945376 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Apr 21 10:09:25.945382 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:09:25.945387 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:09:25.945392 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:09:25.945397 kernel: landlock: Up and running.
Apr 21 10:09:25.945402 kernel: SELinux: Initializing.
Apr 21 10:09:25.945408 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:09:25.945413 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 10:09:25.945418 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Apr 21 10:09:25.945426 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:09:25.945431 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:09:25.945436 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:09:25.945441 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 21 10:09:25.945447 kernel: ... version:                0
Apr 21 10:09:25.945452 kernel: ... bit width:              48
Apr 21 10:09:25.945457 kernel: ... generic registers:      6
Apr 21 10:09:25.945462 kernel: ... value mask:             0000ffffffffffff
Apr 21 10:09:25.945470 kernel: ... max period:             00007fffffffffff
Apr 21 10:09:25.945475 kernel: ... fixed-purpose events:   0
Apr 21 10:09:25.945480 kernel: ... event mask:             000000000000003f
Apr 21 10:09:25.945485 kernel: signal: max sigframe size: 3376
Apr 21 10:09:25.945490 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:09:25.945496 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:09:25.945501 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:09:25.945506 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:09:25.945511 kernel: .... node #0, CPUs: #1
Apr 21 10:09:25.945516 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:09:25.945524 kernel: smpboot: Max logical packages: 1
Apr 21 10:09:25.945530 kernel: smpboot: Total of 2 processors activated (9585.59 BogoMIPS)
Apr 21 10:09:25.945535 kernel: devtmpfs: initialized
Apr 21 10:09:25.945540 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:09:25.945545 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Apr 21 10:09:25.945551 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:09:25.945556 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:09:25.945575 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:09:25.945581 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:09:25.945588 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:09:25.945594 kernel: audit: type=2000 audit(1776766164.971:1): state=initialized audit_enabled=0 res=1
Apr 21 10:09:25.945599 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:09:25.945604 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:09:25.945609 kernel: cpuidle: using governor menu
Apr 21 10:09:25.945614 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:09:25.945619 kernel: dca service started, version 1.12.1
Apr 21 10:09:25.945625 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Apr 21 10:09:25.945630 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:09:25.945638 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:09:25.945643 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:09:25.945648 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:09:25.945653 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:09:25.945659 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:09:25.945664 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:09:25.945669 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:09:25.945674 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:09:25.945679 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 10:09:25.945687 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:09:25.945692 kernel: ACPI: Interpreter enabled
Apr 21 10:09:25.945698 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:09:25.945703 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:09:25.945708 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:09:25.945713 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:09:25.945718 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 10:09:25.945723 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:09:25.945876 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:09:25.945987 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 10:09:25.946085 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 10:09:25.946092 kernel: PCI host bridge to bus 0000:00
Apr 21 10:09:25.946192 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:09:25.946292 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:09:25.946380 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:09:25.946470 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Apr 21 10:09:25.946568 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Apr 21 10:09:25.946673 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Apr 21 10:09:25.946762 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:09:25.946872 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 21 10:09:25.946982 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Apr 21 10:09:25.947083 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80000000-0x807fffff pref]
Apr 21 10:09:25.947179 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc060500000-0xc060503fff 64bit pref]
Apr 21 10:09:25.947321 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8138a000-0x8138afff]
Apr 21 10:09:25.947429 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:09:25.947530 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Apr 21 10:09:25.947644 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:09:25.947752 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.947855 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x81389000-0x81389fff]
Apr 21 10:09:25.947962 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.948059 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x81388000-0x81388fff]
Apr 21 10:09:25.948163 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.948269 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x81387000-0x81387fff]
Apr 21 10:09:25.948372 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.948471 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x81386000-0x81386fff]
Apr 21 10:09:25.948585 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.948684 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x81385000-0x81385fff]
Apr 21 10:09:25.948792 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.948888 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x81384000-0x81384fff]
Apr 21 10:09:25.948992 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.949093 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x81383000-0x81383fff]
Apr 21 10:09:25.949210 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.949310 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x81382000-0x81382fff]
Apr 21 10:09:25.949421 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 21 10:09:25.949522 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x81381000-0x81381fff]
Apr 21 10:09:25.949668 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 21 10:09:25.949767 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 10:09:25.949873 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 21 10:09:25.949968 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x6040-0x605f]
Apr 21 10:09:25.950087 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0x81380000-0x81380fff]
Apr 21 10:09:25.950236 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 21 10:09:25.950370 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6000-0x603f]
Apr 21 10:09:25.950520 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:09:25.951728 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x81200000-0x81200fff]
Apr 21 10:09:25.951845 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xc060000000-0xc060003fff 64bit pref]
Apr 21 10:09:25.951949 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:09:25.952047 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:09:25.952141 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:09:25.952250 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:09:25.952378 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 21 10:09:25.952485 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x81100000-0x81103fff 64bit]
Apr 21 10:09:25.953634 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:09:25.953742 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:09:25.953851 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 21 10:09:25.953951 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x81000000-0x81000fff]
Apr 21 10:09:25.954050 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xc060100000-0xc060103fff 64bit pref]
Apr 21 10:09:25.954149 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:09:25.954256 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:09:25.954351 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:09:25.954460 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 21 10:09:25.954570 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xc060200000-0xc060203fff 64bit pref]
Apr 21 10:09:25.954667 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:09:25.954762 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:09:25.954869 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 21 10:09:25.954973 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x80f00000-0x80f00fff]
Apr 21 10:09:25.955074 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xc060300000-0xc060303fff 64bit pref]
Apr 21 10:09:25.955170 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:09:25.955278 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:09:25.955374 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:09:25.955482 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 21 10:09:25.957648 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x80e00000-0x80e00fff]
Apr 21 10:09:25.957771 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xc060400000-0xc060403fff 64bit pref]
Apr 21 10:09:25.957871 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:09:25.957968 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:09:25.958062 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:09:25.958069 kernel: acpiphp: Slot [0] registered
Apr 21 10:09:25.958177 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 21 10:09:25.958311 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x80c00000-0x80c00fff]
Apr 21 10:09:25.958419 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xc000000000-0xc000003fff 64bit pref]
Apr 21 10:09:25.958519 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 21 10:09:25.958629 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:09:25.958726 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:09:25.958826 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:09:25.958833 kernel: acpiphp: Slot [0-2] registered
Apr 21 10:09:25.958928 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:09:25.959026 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:09:25.959129 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:09:25.959135 kernel: acpiphp: Slot [0-3] registered
Apr 21 10:09:25.959240 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:09:25.959336 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:09:25.959444 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:09:25.959452 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:09:25.959458 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:09:25.959463 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:09:25.959468 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:09:25.959477 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 10:09:25.959483 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 10:09:25.959488 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 10:09:25.959493 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 10:09:25.959499 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 10:09:25.959504 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 10:09:25.959509 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 10:09:25.959514 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 10:09:25.959519 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 10:09:25.959527 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 10:09:25.959532 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 10:09:25.959538 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 10:09:25.959543 kernel: iommu: Default domain type: Translated
Apr 21 10:09:25.959548 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:09:25.959553 kernel: efivars: Registered efivars operations
Apr 21 10:09:25.961580 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:09:25.961589 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:09:25.961595 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Apr 21 10:09:25.961604 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Apr 21 10:09:25.961609 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Apr 21 10:09:25.961614 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Apr 21 10:09:25.961734 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 10:09:25.961836 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 10:09:25.961932 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:09:25.961938 kernel: vgaarb: loaded
Apr 21 10:09:25.961944 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 10:09:25.961949 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 10:09:25.961958 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:09:25.961963 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:09:25.961969 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:09:25.961974 kernel: pnp: PnP ACPI init
Apr 21 10:09:25.962079 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Apr 21 10:09:25.962087 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:09:25.962092 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:09:25.962098 kernel: NET: Registered PF_INET protocol family
Apr 21 10:09:25.962119 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 10:09:25.962127 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 10:09:25.962132 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:09:25.962138 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:09:25.962143 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 10:09:25.962149 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 10:09:25.962154 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:09:25.962159 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 10:09:25.962167 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:09:25.962173 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:09:25.962286 kernel: pci 0000:01:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:09:25.962389 kernel: pci 0000:07:00.0: can't claim BAR 6 [mem 0xfff80000-0xffffffff pref]: no compatible bridge window
Apr 21 10:09:25.962493 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 21 10:09:25.964648 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 21 10:09:25.964778 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 21 10:09:25.964894 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Apr 21 10:09:25.965023 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Apr 21 10:09:25.965146 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Apr 21 10:09:25.965283 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x81280000-0x812fffff pref]
Apr 21 10:09:25.965403 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 21 10:09:25.965527 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Apr 21 10:09:25.965676 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Apr 21 10:09:25.965818 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 21 10:09:25.965944 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Apr 21 10:09:25.966066 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 21 10:09:25.966186 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Apr 21 10:09:25.966321 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Apr 21 10:09:25.966441 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 21 10:09:25.968542 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Apr 21 10:09:25.968707 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 21 10:09:25.968831 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Apr 21 10:09:25.968955 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Apr 21 10:09:25.969651 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 21 10:09:25.969774 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Apr 21 10:09:25.969891 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Apr 21 10:09:25.970013 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x80c80000-0x80cfffff pref]
Apr 21 10:09:25.970140 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 21 10:09:25.970270 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Apr 21 10:09:25.970384 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Apr 21 10:09:25.970497 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Apr 21 10:09:25.970629 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 21 10:09:25.970745 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Apr 21 10:09:25.972712 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Apr 21 10:09:25.972832 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Apr 21 10:09:25.972949 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 21 10:09:25.973069 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Apr 21 10:09:25.973184 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Apr 21 10:09:25.973309 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Apr 21 10:09:25.973429 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr
21 10:09:25.973535 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 21 10:09:25.975176 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 21 10:09:25.975303 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window] Apr 21 10:09:25.975411 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Apr 21 10:09:25.975518 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window] Apr 21 10:09:25.975667 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff] Apr 21 10:09:25.975786 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref] Apr 21 10:09:25.975915 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff] Apr 21 10:09:25.976050 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff] Apr 21 10:09:25.976170 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref] Apr 21 10:09:25.976310 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref] Apr 21 10:09:25.976415 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff] Apr 21 10:09:25.976508 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref] Apr 21 10:09:25.977693 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff] Apr 21 10:09:25.977805 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref] Apr 21 10:09:25.977916 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Apr 21 10:09:25.978042 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff] Apr 21 10:09:25.978175 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref] Apr 21 10:09:25.978313 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Apr 21 10:09:25.978410 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff] Apr 21 10:09:25.978509 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref] Apr 21 10:09:25.978624 kernel: pci_bus 0000:09: resource 0 
[io 0x3000-0x3fff] Apr 21 10:09:25.978717 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff] Apr 21 10:09:25.978809 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref] Apr 21 10:09:25.978817 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 21 10:09:25.978823 kernel: PCI: CLS 0 bytes, default 64 Apr 21 10:09:25.978829 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 21 10:09:25.978834 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB) Apr 21 10:09:25.978844 kernel: Initialise system trusted keyrings Apr 21 10:09:25.978849 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 21 10:09:25.978855 kernel: Key type asymmetric registered Apr 21 10:09:25.978861 kernel: Asymmetric key parser 'x509' registered Apr 21 10:09:25.978867 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 21 10:09:25.978872 kernel: io scheduler mq-deadline registered Apr 21 10:09:25.978878 kernel: io scheduler kyber registered Apr 21 10:09:25.978884 kernel: io scheduler bfq registered Apr 21 10:09:25.978989 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 21 10:09:25.979092 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 21 10:09:25.979192 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 21 10:09:25.979298 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Apr 21 10:09:25.979394 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 21 10:09:25.979493 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 21 10:09:25.981350 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 21 10:09:25.981484 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 21 10:09:25.981629 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 21 10:09:25.981749 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 21 10:09:25.981875 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 21 
10:09:25.981994 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 21 10:09:25.982109 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 21 10:09:25.982293 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 21 10:09:25.982408 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 21 10:09:25.982521 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 21 10:09:25.982531 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 21 10:09:25.982698 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Apr 21 10:09:25.982823 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Apr 21 10:09:25.982833 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 21 10:09:25.982841 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Apr 21 10:09:25.982849 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 21 10:09:25.982857 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 21 10:09:25.982866 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 21 10:09:25.982873 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 21 10:09:25.982881 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 21 10:09:25.982889 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 21 10:09:25.983017 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 21 10:09:25.983130 kernel: rtc_cmos 00:03: registered as rtc0 Apr 21 10:09:25.983252 kernel: rtc_cmos 00:03: setting system clock to 2026-04-21T10:09:25 UTC (1776766165) Apr 21 10:09:25.983364 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Apr 21 10:09:25.983373 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Apr 21 10:09:25.983382 kernel: efifb: probing for efifb Apr 21 10:09:25.983390 kernel: efifb: framebuffer at 0x80000000, using 4032k, total 4032k Apr 21 10:09:25.983401 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Apr 21 
10:09:25.983409 kernel: efifb: scrolling: redraw Apr 21 10:09:25.983417 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Apr 21 10:09:25.983425 kernel: Console: switching to colour frame buffer device 160x50 Apr 21 10:09:25.983433 kernel: fb0: EFI VGA frame buffer device Apr 21 10:09:25.983441 kernel: pstore: Using crash dump compression: deflate Apr 21 10:09:25.983449 kernel: pstore: Registered efi_pstore as persistent store backend Apr 21 10:09:25.983457 kernel: NET: Registered PF_INET6 protocol family Apr 21 10:09:25.983465 kernel: Segment Routing with IPv6 Apr 21 10:09:25.983476 kernel: In-situ OAM (IOAM) with IPv6 Apr 21 10:09:25.983483 kernel: NET: Registered PF_PACKET protocol family Apr 21 10:09:25.983491 kernel: Key type dns_resolver registered Apr 21 10:09:25.983499 kernel: IPI shorthand broadcast: enabled Apr 21 10:09:25.983507 kernel: sched_clock: Marking stable (1256013290, 226317599)->(1633178309, -150847420) Apr 21 10:09:25.983515 kernel: registered taskstats version 1 Apr 21 10:09:25.983522 kernel: Loading compiled-in X.509 certificates Apr 21 10:09:25.983530 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b' Apr 21 10:09:25.983538 kernel: Key type .fscrypt registered Apr 21 10:09:25.983546 kernel: Key type fscrypt-provisioning registered Apr 21 10:09:25.985305 kernel: ima: No TPM chip found, activating TPM-bypass! 
Apr 21 10:09:25.985322 kernel: ima: Allocated hash algorithm: sha1 Apr 21 10:09:25.985331 kernel: ima: No architecture policies found Apr 21 10:09:25.985339 kernel: clk: Disabling unused clocks Apr 21 10:09:25.985347 kernel: Freeing unused kernel image (initmem) memory: 42892K Apr 21 10:09:25.985355 kernel: Write protecting the kernel read-only data: 36864k Apr 21 10:09:25.985363 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 21 10:09:25.985371 kernel: Run /init as init process Apr 21 10:09:25.985382 kernel: with arguments: Apr 21 10:09:25.985390 kernel: /init Apr 21 10:09:25.985398 kernel: with environment: Apr 21 10:09:25.985408 kernel: HOME=/ Apr 21 10:09:25.985416 kernel: TERM=linux Apr 21 10:09:25.985426 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 21 10:09:25.985436 systemd[1]: Detected virtualization kvm. Apr 21 10:09:25.985448 systemd[1]: Detected architecture x86-64. Apr 21 10:09:25.985456 systemd[1]: Running in initrd. Apr 21 10:09:25.985464 systemd[1]: No hostname configured, using default hostname. Apr 21 10:09:25.985472 systemd[1]: Hostname set to . Apr 21 10:09:25.985480 systemd[1]: Initializing machine ID from VM UUID. Apr 21 10:09:25.985488 systemd[1]: Queued start job for default target initrd.target. Apr 21 10:09:25.985497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 21 10:09:25.985505 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 21 10:09:25.985514 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Apr 21 10:09:25.985525 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 21 10:09:25.985533 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 21 10:09:25.985542 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 21 10:09:25.985551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 21 10:09:25.985572 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 21 10:09:25.985591 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 21 10:09:25.985603 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 21 10:09:25.985611 systemd[1]: Reached target paths.target - Path Units. Apr 21 10:09:25.985620 systemd[1]: Reached target slices.target - Slice Units. Apr 21 10:09:25.985628 systemd[1]: Reached target swap.target - Swaps. Apr 21 10:09:25.985636 systemd[1]: Reached target timers.target - Timer Units. Apr 21 10:09:25.985644 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 21 10:09:25.985652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 21 10:09:25.985660 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 21 10:09:25.985668 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 21 10:09:25.985680 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 21 10:09:25.985688 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 21 10:09:25.985696 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 21 10:09:25.985704 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 21 10:09:25.985712 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 21 10:09:25.985721 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 21 10:09:25.985729 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 21 10:09:25.985737 systemd[1]: Starting systemd-fsck-usr.service... Apr 21 10:09:25.985746 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 21 10:09:25.985756 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 21 10:09:25.985792 systemd-journald[188]: Collecting audit messages is disabled. Apr 21 10:09:25.985812 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:25.985823 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 21 10:09:25.985831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 21 10:09:25.985839 systemd[1]: Finished systemd-fsck-usr.service. Apr 21 10:09:25.985848 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:25.985856 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:09:25.985868 systemd-journald[188]: Journal started Apr 21 10:09:25.985886 systemd-journald[188]: Runtime Journal (/run/log/journal/76e722b314294d6a9fca7cf84ec27d9f) is 8.0M, max 76.3M, 68.3M free. Apr 21 10:09:25.990323 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 21 10:09:25.976667 systemd-modules-load[189]: Inserted module 'overlay' Apr 21 10:09:25.997393 systemd[1]: Started systemd-journald.service - Journal Service. Apr 21 10:09:26.001781 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 21 10:09:26.007103 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 21 10:09:26.018413 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 21 10:09:26.020651 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 21 10:09:26.023061 systemd-modules-load[189]: Inserted module 'br_netfilter' Apr 21 10:09:26.023636 kernel: Bridge firewalling registered Apr 21 10:09:26.028821 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:09:26.029962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 21 10:09:26.030743 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 10:09:26.037729 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 21 10:09:26.041671 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 21 10:09:26.042788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 21 10:09:26.050069 dracut-cmdline[218]: dracut-dracut-053 Apr 21 10:09:26.053516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 21 10:09:26.054431 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a Apr 21 10:09:26.061685 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 10:09:26.092259 systemd-resolved[238]: Positive Trust Anchors: Apr 21 10:09:26.092273 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 10:09:26.092295 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 10:09:26.095939 systemd-resolved[238]: Defaulting to hostname 'linux'. Apr 21 10:09:26.098021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 10:09:26.098479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 10:09:26.130587 kernel: SCSI subsystem initialized Apr 21 10:09:26.138581 kernel: Loading iSCSI transport class v2.0-870. Apr 21 10:09:26.147586 kernel: iscsi: registered transport (tcp) Apr 21 10:09:26.167249 kernel: iscsi: registered transport (qla4xxx) Apr 21 10:09:26.167321 kernel: QLogic iSCSI HBA Driver Apr 21 10:09:26.214271 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 21 10:09:26.221722 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 21 10:09:26.248591 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 21 10:09:26.248666 kernel: device-mapper: uevent: version 1.0.3 Apr 21 10:09:26.248681 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 21 10:09:26.293605 kernel: raid6: avx512x4 gen() 44351 MB/s Apr 21 10:09:26.311609 kernel: raid6: avx512x2 gen() 44380 MB/s Apr 21 10:09:26.329601 kernel: raid6: avx512x1 gen() 41216 MB/s Apr 21 10:09:26.347598 kernel: raid6: avx2x4 gen() 43979 MB/s Apr 21 10:09:26.365596 kernel: raid6: avx2x2 gen() 48306 MB/s Apr 21 10:09:26.384680 kernel: raid6: avx2x1 gen() 36316 MB/s Apr 21 10:09:26.384759 kernel: raid6: using algorithm avx2x2 gen() 48306 MB/s Apr 21 10:09:26.404724 kernel: raid6: .... xor() 35746 MB/s, rmw enabled Apr 21 10:09:26.404802 kernel: raid6: using avx512x2 recovery algorithm Apr 21 10:09:26.421648 kernel: xor: automatically using best checksumming function avx Apr 21 10:09:26.534604 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 21 10:09:26.547137 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 21 10:09:26.553744 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 21 10:09:26.564828 systemd-udevd[407]: Using default interface naming scheme 'v255'. Apr 21 10:09:26.568642 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 10:09:26.577771 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 21 10:09:26.590603 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Apr 21 10:09:26.623034 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 21 10:09:26.628716 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 21 10:09:26.700511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 21 10:09:26.706719 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Apr 21 10:09:26.720246 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 21 10:09:26.724848 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 21 10:09:26.725543 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 21 10:09:26.726629 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 21 10:09:26.733698 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 21 10:09:26.748132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 21 10:09:26.800940 kernel: ACPI: bus type USB registered Apr 21 10:09:26.800996 kernel: usbcore: registered new interface driver usbfs Apr 21 10:09:26.801006 kernel: usbcore: registered new interface driver hub Apr 21 10:09:26.803573 kernel: usbcore: registered new device driver usb Apr 21 10:09:26.814590 kernel: scsi host0: Virtio SCSI HBA Apr 21 10:09:26.818576 kernel: cryptd: max_cpu_qlen set to 1000 Apr 21 10:09:26.826790 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 21 10:09:26.827364 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 21 10:09:26.828147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:09:26.828836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 21 10:09:26.828921 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:26.832599 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 21 10:09:26.832245 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:26.840773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:26.852087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 21 10:09:26.852617 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:26.856822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 10:09:26.872909 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 10:09:26.880866 kernel: AVX2 version of gcm_enc/dec engaged. Apr 21 10:09:26.880915 kernel: AES CTR mode by8 optimization enabled Apr 21 10:09:26.885042 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 21 10:09:26.894901 kernel: libata version 3.00 loaded. Apr 21 10:09:26.901606 kernel: ahci 0000:00:1f.2: version 3.0 Apr 21 10:09:26.901808 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 21 10:09:26.908236 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 21 10:09:26.908465 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 21 10:09:26.913979 kernel: scsi host1: ahci Apr 21 10:09:26.915309 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 21 10:09:26.921090 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 10:09:26.921290 kernel: scsi host2: ahci Apr 21 10:09:26.923545 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 21 10:09:26.923711 kernel: scsi host3: ahci Apr 21 10:09:26.930575 kernel: scsi host4: ahci Apr 21 10:09:26.932579 kernel: scsi host5: ahci Apr 21 10:09:26.932615 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 21 10:09:26.938036 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 21 10:09:26.938236 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 21 10:09:26.942866 kernel: scsi host6: ahci Apr 21 10:09:26.942911 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Apr 21 10:09:26.943069 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 21 10:09:26.946934 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 21 10:09:26.947097 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 21 10:09:26.947229 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 21 10:09:26.947355 kernel: hub 1-0:1.0: USB hub found Apr 21 10:09:26.950881 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 Apr 21 10:09:26.950932 kernel: hub 1-0:1.0: 4 ports detected Apr 21 10:09:26.951112 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 Apr 21 10:09:26.951122 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 21 10:09:26.951154 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 Apr 21 10:09:26.953250 kernel: hub 2-0:1.0: USB hub found Apr 21 10:09:26.953415 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 Apr 21 10:09:26.953816 kernel: hub 2-0:1.0: 4 ports detected Apr 21 10:09:26.953967 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 Apr 21 10:09:26.966836 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 Apr 21 10:09:26.971620 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 21 10:09:26.976846 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 21 10:09:26.976877 kernel: GPT:17805311 != 160006143 Apr 21 10:09:26.979601 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 21 10:09:26.979611 kernel: GPT:17805311 != 160006143 Apr 21 10:09:26.980823 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 21 10:09:26.982815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:09:26.986589 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 21 10:09:27.194740 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 21 10:09:27.287581 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:27.287666 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 21 10:09:27.287678 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:27.291987 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 21 10:09:27.292021 kernel: ata1.00: applying bridge limits Apr 21 10:09:27.292579 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:27.294587 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:27.299589 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 21 10:09:27.299617 kernel: ata1.00: configured for UDMA/100 Apr 21 10:09:27.304592 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 21 10:09:27.344587 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 21 10:09:27.350614 kernel: usbcore: registered new interface driver usbhid Apr 21 10:09:27.350647 kernel: usbhid: USB HID core driver Apr 21 10:09:27.364431 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 21 10:09:27.364865 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 21 10:09:27.369595 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (460) Apr 21 10:09:27.379445 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 21 10:09:27.379485 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 21 10:09:27.385581 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (481) Apr 21 10:09:27.386710 systemd[1]: Found device 
dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 21 10:09:27.388594 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Apr 21 10:09:27.392057 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 21 10:09:27.401339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 21 10:09:27.404857 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 21 10:09:27.405235 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 21 10:09:27.412745 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 21 10:09:27.417086 disk-uuid[585]: Primary Header is updated. Apr 21 10:09:27.417086 disk-uuid[585]: Secondary Entries is updated. Apr 21 10:09:27.417086 disk-uuid[585]: Secondary Header is updated. Apr 21 10:09:28.433604 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 21 10:09:28.436494 disk-uuid[587]: The operation has completed successfully. Apr 21 10:09:28.501465 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 21 10:09:28.501611 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 21 10:09:28.506731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 21 10:09:28.509883 sh[598]: Success Apr 21 10:09:28.524041 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 21 10:09:28.578776 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 21 10:09:28.593678 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 21 10:09:28.594290 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 21 10:09:28.616605 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539 Apr 21 10:09:28.616659 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:09:28.616669 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 21 10:09:28.621930 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 21 10:09:28.621960 kernel: BTRFS info (device dm-0): using free space tree Apr 21 10:09:28.632598 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 21 10:09:28.635329 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 21 10:09:28.636990 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 21 10:09:28.646760 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 21 10:09:28.650697 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 21 10:09:28.709178 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:09:28.709253 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 21 10:09:28.709266 kernel: BTRFS info (device sda6): using free space tree Apr 21 10:09:28.721521 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 21 10:09:28.721591 kernel: BTRFS info (device sda6): auto enabling async discard Apr 21 10:09:28.737279 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453 Apr 21 10:09:28.736962 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 21 10:09:28.746815 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 21 10:09:28.755812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 21 10:09:28.756428 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 21 10:09:28.759837 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 21 10:09:28.796333 systemd-networkd[780]: lo: Link UP Apr 21 10:09:28.796611 systemd-networkd[780]: lo: Gained carrier Apr 21 10:09:28.799830 systemd-networkd[780]: Enumeration completed Apr 21 10:09:28.800641 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 10:09:28.801702 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:09:28.801708 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:09:28.802414 systemd[1]: Reached target network.target - Network. Apr 21 10:09:28.805236 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:09:28.805242 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 10:09:28.805916 systemd-networkd[780]: eth0: Link UP Apr 21 10:09:28.805920 systemd-networkd[780]: eth0: Gained carrier Apr 21 10:09:28.805926 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 10:09:28.816014 systemd-networkd[780]: eth1: Link UP Apr 21 10:09:28.816018 systemd-networkd[780]: eth1: Gained carrier Apr 21 10:09:28.816029 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 10:09:28.828041 ignition[775]: Ignition 2.19.0
Apr 21 10:09:28.828058 ignition[775]: Stage: fetch-offline
Apr 21 10:09:28.828110 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:28.828125 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:28.828268 ignition[775]: parsed url from cmdline: ""
Apr 21 10:09:28.828275 ignition[775]: no config URL provided
Apr 21 10:09:28.828283 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:09:28.832004 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:09:28.828296 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:09:28.828303 ignition[775]: failed to fetch config: resource requires networking
Apr 21 10:09:28.829499 ignition[775]: Ignition finished successfully
Apr 21 10:09:28.839690 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:09:28.841647 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 10:09:28.851286 ignition[787]: Ignition 2.19.0
Apr 21 10:09:28.851298 ignition[787]: Stage: fetch
Apr 21 10:09:28.851489 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:28.851499 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:28.853770 ignition[787]: parsed url from cmdline: ""
Apr 21 10:09:28.853806 ignition[787]: no config URL provided
Apr 21 10:09:28.854172 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:09:28.854185 ignition[787]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:09:28.854222 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 21 10:09:28.854422 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 21 10:09:28.876615 systemd-networkd[780]: eth0: DHCPv4 address 46.62.167.141/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 10:09:29.055069 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 21 10:09:29.063401 ignition[787]: GET result: OK
Apr 21 10:09:29.064641 ignition[787]: parsing config with SHA512: c4030f054d2489bb56f82b6b9366387e3e0c3a8001bbc6173888d2249d9fd9a04cad219bf931a95245ebf1f19ee9a351533f910a97248dec13611093e04c30ff
Apr 21 10:09:29.068956 unknown[787]: fetched base config from "system"
Apr 21 10:09:29.069338 ignition[787]: fetch: fetch complete
Apr 21 10:09:29.068973 unknown[787]: fetched base config from "system"
Apr 21 10:09:29.069347 ignition[787]: fetch: fetch passed
Apr 21 10:09:29.068983 unknown[787]: fetched user config from "hetzner"
Apr 21 10:09:29.069405 ignition[787]: Ignition finished successfully
Apr 21 10:09:29.075309 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:09:29.082814 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:09:29.129528 ignition[794]: Ignition 2.19.0
Apr 21 10:09:29.129551 ignition[794]: Stage: kargs
Apr 21 10:09:29.129871 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:29.129893 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:29.131123 ignition[794]: kargs: kargs passed
Apr 21 10:09:29.135132 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:09:29.131219 ignition[794]: Ignition finished successfully
Apr 21 10:09:29.143864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:09:29.172320 ignition[801]: Ignition 2.19.0
Apr 21 10:09:29.172333 ignition[801]: Stage: disks
Apr 21 10:09:29.172518 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:29.172531 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:29.173309 ignition[801]: disks: disks passed
Apr 21 10:09:29.173357 ignition[801]: Ignition finished successfully
Apr 21 10:09:29.176749 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:09:29.178152 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:09:29.179390 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:09:29.180139 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:09:29.181385 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:09:29.182385 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:09:29.194779 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:09:29.218941 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 21 10:09:29.221359 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:09:29.225636 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:09:29.329910 kernel: EXT4-fs (sda9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:09:29.330006 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:09:29.330920 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:09:29.338645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:09:29.344941 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:09:29.349239 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 21 10:09:29.350257 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:09:29.350309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:09:29.354617 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:09:29.357989 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:09:29.369609 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (817)
Apr 21 10:09:29.369661 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:09:29.375617 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:09:29.375645 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:09:29.386060 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:09:29.386104 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:09:29.390245 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:09:29.413952 coreos-metadata[819]: Apr 21 10:09:29.413 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 21 10:09:29.415203 coreos-metadata[819]: Apr 21 10:09:29.415 INFO Fetch successful
Apr 21 10:09:29.416638 coreos-metadata[819]: Apr 21 10:09:29.416 INFO wrote hostname ci-4081-3-7-a-afac96dda8 to /sysroot/etc/hostname
Apr 21 10:09:29.418496 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:09:29.420002 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:09:29.423516 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:09:29.427345 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:09:29.431595 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:09:29.524491 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:09:29.529663 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:09:29.535823 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:09:29.548592 kernel: BTRFS info (device sda6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:09:29.560581 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:09:29.571788 ignition[937]: INFO : Ignition 2.19.0
Apr 21 10:09:29.572416 ignition[937]: INFO : Stage: mount
Apr 21 10:09:29.572754 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:29.572754 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:29.573504 ignition[937]: INFO : mount: mount passed
Apr 21 10:09:29.573836 ignition[937]: INFO : Ignition finished successfully
Apr 21 10:09:29.575497 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:09:29.579665 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:09:29.610972 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:09:29.616695 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:09:29.630595 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (948)
Apr 21 10:09:29.638180 kernel: BTRFS info (device sda6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:09:29.638245 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:09:29.638256 kernel: BTRFS info (device sda6): using free space tree
Apr 21 10:09:29.648059 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 21 10:09:29.648128 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 21 10:09:29.652994 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:09:29.681489 ignition[965]: INFO : Ignition 2.19.0
Apr 21 10:09:29.681489 ignition[965]: INFO : Stage: files
Apr 21 10:09:29.682832 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:29.682832 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:29.683756 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:09:29.684345 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:09:29.684345 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:09:29.687070 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:09:29.687690 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:09:29.687690 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:09:29.687543 unknown[965]: wrote ssh authorized keys file for user: core
Apr 21 10:09:29.689393 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:09:29.689393 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:09:29.866971 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 21 10:09:29.881134 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:09:30.259635 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:09:30.259635 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:30.262822 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:09:30.507122 systemd-networkd[780]: eth1: Gained IPv6LL
Apr 21 10:09:30.517732 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:09:30.863012 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:09:30.863012 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:09:30.866114 ignition[965]: INFO : files: files passed
Apr 21 10:09:30.866114 ignition[965]: INFO : Ignition finished successfully
Apr 21 10:09:30.866271 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:09:30.882330 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:09:30.885091 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:09:30.886256 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:09:30.886723 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:09:30.913898 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:09:30.913898 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:09:30.916460 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:09:30.919005 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:09:30.919895 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:09:30.926787 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:09:30.950758 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:09:30.950953 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:09:30.952836 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:09:30.954262 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:09:30.955169 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:09:30.959774 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:09:30.973218 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:09:30.980788 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:09:30.992694 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:09:30.993627 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:09:30.994139 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:09:30.995196 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:09:30.995279 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:09:30.997015 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:09:30.999261 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:09:31.000026 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:09:31.000723 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:09:31.001739 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:09:31.002771 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:09:31.003848 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:09:31.004919 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:09:31.005997 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:09:31.007084 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:09:31.008327 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:09:31.008413 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:09:31.010027 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:09:31.011104 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:09:31.012085 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:09:31.012171 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:09:31.013097 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:09:31.013173 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:09:31.014961 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:09:31.015046 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:09:31.016077 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:09:31.016148 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:09:31.017154 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 21 10:09:31.017232 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 21 10:09:31.027730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:09:31.031688 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:09:31.032079 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:09:31.032199 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:09:31.032877 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:09:31.032977 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:09:31.037732 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:09:31.037835 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:09:31.042579 ignition[1017]: INFO : Ignition 2.19.0
Apr 21 10:09:31.042579 ignition[1017]: INFO : Stage: umount
Apr 21 10:09:31.042579 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:09:31.042579 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 21 10:09:31.048237 ignition[1017]: INFO : umount: umount passed
Apr 21 10:09:31.048237 ignition[1017]: INFO : Ignition finished successfully
Apr 21 10:09:31.045001 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:09:31.045087 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:09:31.045894 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:09:31.045960 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:09:31.046369 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:09:31.046402 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:09:31.048635 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:09:31.048673 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:09:31.049307 systemd[1]: Stopped target network.target - Network.
Apr 21 10:09:31.050083 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:09:31.050123 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:09:31.050810 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:09:31.051536 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:09:31.053597 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:09:31.054161 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:09:31.054457 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:09:31.055624 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:09:31.055667 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:09:31.056062 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:09:31.056097 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:09:31.056404 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:09:31.056440 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:09:31.056755 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:09:31.056787 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:09:31.057217 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:09:31.059522 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:09:31.061260 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:09:31.062656 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 21 10:09:31.063426 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:09:31.063512 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:09:31.064968 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:09:31.065042 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:09:31.066339 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:09:31.066450 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:09:31.067612 systemd-networkd[780]: eth1: DHCPv6 lease lost
Apr 21 10:09:31.070011 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:09:31.070130 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:09:31.071908 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:09:31.071967 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:09:31.076637 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:09:31.077770 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:09:31.077817 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:09:31.078507 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:09:31.078544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:09:31.080818 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:09:31.080864 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:09:31.081615 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:09:31.081652 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:09:31.082416 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:09:31.093130 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:09:31.093248 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:09:31.097944 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:09:31.098131 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:09:31.099340 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:09:31.099420 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:09:31.100222 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:09:31.100256 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:09:31.100984 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:09:31.101025 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:09:31.102278 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:09:31.102317 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:09:31.103525 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:09:31.103583 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:09:31.118745 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:09:31.119199 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:09:31.119253 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:09:31.119746 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:09:31.119786 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:09:31.120271 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:09:31.120311 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:09:31.122630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:09:31.122676 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:09:31.125686 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:09:31.125783 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:09:31.127670 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:09:31.133694 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:09:31.140757 systemd[1]: Switching root.
Apr 21 10:09:31.167989 systemd-journald[188]: Journal stopped
Apr 21 10:09:32.259360 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:09:32.259434 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:09:32.259450 kernel: SELinux: policy capability open_perms=1
Apr 21 10:09:32.259459 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:09:32.259474 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:09:32.259482 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:09:32.259491 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:09:32.259499 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:09:32.259507 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:09:32.259515 kernel: audit: type=1403 audit(1776766171.332:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:09:32.259529 systemd[1]: Successfully loaded SELinux policy in 44.964ms.
Apr 21 10:09:32.259550 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.518ms.
Apr 21 10:09:32.262036 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:09:32.262061 systemd[1]: Detected virtualization kvm.
Apr 21 10:09:32.262071 systemd[1]: Detected architecture x86-64.
Apr 21 10:09:32.262080 systemd[1]: Detected first boot.
Apr 21 10:09:32.262090 systemd[1]: Hostname set to .
Apr 21 10:09:32.262099 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:09:32.262110 zram_generator::config[1059]: No configuration found.
Apr 21 10:09:32.262120 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:09:32.262129 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:09:32.262138 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:09:32.262147 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:09:32.262156 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:09:32.262170 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:09:32.262179 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:09:32.262200 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:09:32.262209 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:09:32.262218 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:09:32.262232 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:09:32.262241 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:09:32.262250 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:09:32.262259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:09:32.262268 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:09:32.262279 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:09:32.262288 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:09:32.262297 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:09:32.262306 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:09:32.262315 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:09:32.262324 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 10:09:32.262332 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:09:32.262344 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:09:32.262353 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:09:32.262361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:09:32.262375 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:09:32.262388 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:09:32.262400 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:09:32.262409 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:09:32.262418 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:09:32.262427 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:09:32.262439 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:09:32.262447 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:09:32.262456 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:09:32.262465 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:09:32.262474 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:09:32.262482 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:09:32.262493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:32.262501 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:09:32.262510 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:09:32.262520 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:09:32.262530 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:09:32.262538 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:09:32.262547 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:09:32.262556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:09:32.262578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:09:32.262587 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:09:32.262596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:09:32.262607 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:09:32.262616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:09:32.262624 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:09:32.262633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:09:32.262642 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:09:32.262650 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:09:32.262658 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:09:32.262667 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:09:32.262678 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:09:32.262689 kernel: loop: module loaded
Apr 21 10:09:32.262699 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:09:32.262707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:09:32.262717 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:09:32.262725 kernel: ACPI: bus type drm_connector registered
Apr 21 10:09:32.262734 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:09:32.262742 kernel: fuse: init (API version 7.39)
Apr 21 10:09:32.262751 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:09:32.262762 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:09:32.262771 systemd[1]: Stopped verity-setup.service.
Apr 21 10:09:32.262780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:32.262789 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:09:32.262797 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:09:32.262808 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:09:32.262817 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:09:32.262825 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:09:32.262834 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:09:32.262843 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:09:32.262851 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:09:32.262864 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:09:32.262872 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:09:32.262899 systemd-journald[1142]: Collecting audit messages is disabled.
Apr 21 10:09:32.262917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:09:32.262926 systemd-journald[1142]: Journal started
Apr 21 10:09:32.262944 systemd-journald[1142]: Runtime Journal (/run/log/journal/76e722b314294d6a9fca7cf84ec27d9f) is 8.0M, max 76.3M, 68.3M free.
Apr 21 10:09:31.927869 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:09:31.947981 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 21 10:09:31.948499 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:09:32.267450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:09:32.267477 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:09:32.269547 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:09:32.269713 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:09:32.270424 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:09:32.270650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:09:32.271316 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:09:32.271497 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:09:32.272209 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:09:32.272436 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:09:32.273387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:09:32.274203 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:09:32.275054 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:09:32.284943 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:09:32.292670 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:09:32.298091 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:09:32.298512 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:09:32.298544 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:09:32.301684 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:09:32.306068 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:09:32.307956 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:09:32.308503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:09:32.310691 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:09:32.313694 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:09:32.314067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:09:32.316665 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:09:32.317051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:09:32.319165 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:09:32.321773 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:09:32.325685 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:09:32.333705 systemd-journald[1142]: Time spent on flushing to /var/log/journal/76e722b314294d6a9fca7cf84ec27d9f is 20.308ms for 1175 entries.
Apr 21 10:09:32.333705 systemd-journald[1142]: System Journal (/var/log/journal/76e722b314294d6a9fca7cf84ec27d9f) is 8.0M, max 584.8M, 576.8M free.
Apr 21 10:09:32.373362 systemd-journald[1142]: Received client request to flush runtime journal.
Apr 21 10:09:32.329087 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:09:32.329886 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:09:32.331340 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:09:32.341236 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:09:32.350965 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:09:32.376320 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:09:32.394491 kernel: loop0: detected capacity change from 0 to 8
Apr 21 10:09:32.385764 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:09:32.389494 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:09:32.390064 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:09:32.405911 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:09:32.402087 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:09:32.414994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:09:32.435054 kernel: loop1: detected capacity change from 0 to 228704
Apr 21 10:09:32.436142 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 21 10:09:32.436156 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Apr 21 10:09:32.448181 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:09:32.448927 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:09:32.450157 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:09:32.459775 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:09:32.489647 kernel: loop2: detected capacity change from 0 to 140768
Apr 21 10:09:32.509393 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:09:32.520760 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:09:32.535509 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Apr 21 10:09:32.535527 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Apr 21 10:09:32.536588 kernel: loop3: detected capacity change from 0 to 142488
Apr 21 10:09:32.540699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:09:32.582596 kernel: loop4: detected capacity change from 0 to 8
Apr 21 10:09:32.588416 kernel: loop5: detected capacity change from 0 to 228704
Apr 21 10:09:32.610604 kernel: loop6: detected capacity change from 0 to 140768
Apr 21 10:09:32.626590 kernel: loop7: detected capacity change from 0 to 142488
Apr 21 10:09:32.642438 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 21 10:09:32.643710 (sd-merge)[1210]: Merged extensions into '/usr'.
Apr 21 10:09:32.650014 systemd[1]: Reloading requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:09:32.650369 systemd[1]: Reloading...
Apr 21 10:09:32.734760 zram_generator::config[1242]: No configuration found.
Apr 21 10:09:32.770756 ldconfig[1174]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:09:32.854866 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:09:32.903253 systemd[1]: Reloading finished in 252 ms.
Apr 21 10:09:32.931320 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:09:32.932059 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:09:32.932719 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:09:32.941711 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:09:32.943709 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:09:32.947709 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:09:32.952701 systemd[1]: Reloading requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:09:32.952715 systemd[1]: Reloading...
Apr 21 10:09:32.978351 systemd-udevd[1282]: Using default interface naming scheme 'v255'.
Apr 21 10:09:32.986881 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:09:32.987801 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:09:32.988785 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:09:32.989225 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Apr 21 10:09:32.989417 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Apr 21 10:09:32.993530 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:09:32.993717 systemd-tmpfiles[1281]: Skipping /boot
Apr 21 10:09:33.010133 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:09:33.010374 systemd-tmpfiles[1281]: Skipping /boot
Apr 21 10:09:33.023644 zram_generator::config[1309]: No configuration found.
Apr 21 10:09:33.171585 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1324)
Apr 21 10:09:33.191093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:09:33.197620 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:09:33.218584 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 21 10:09:33.235922 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:09:33.265987 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:09:33.268608 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Apr 21 10:09:33.268629 systemd[1]: Reloading finished in 315 ms.
Apr 21 10:09:33.290220 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:09:33.295989 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:09:33.297436 kernel: Console: switching to colour dummy device 80x25
Apr 21 10:09:33.305434 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Apr 21 10:09:33.305683 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 21 10:09:33.305698 kernel: [drm] features: -context_init
Apr 21 10:09:33.320580 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 21 10:09:33.324614 kernel: [drm] number of scanouts: 1
Apr 21 10:09:33.324660 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Apr 21 10:09:33.330655 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 21 10:09:33.330866 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 21 10:09:33.331048 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 21 10:09:33.331180 kernel: [drm] number of cap sets: 0
Apr 21 10:09:33.333577 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 21 10:09:33.345620 kernel: EDAC MC: Ver: 3.0.0
Apr 21 10:09:33.349688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 21 10:09:33.351078 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Apr 21 10:09:33.351112 kernel: Console: switching to colour frame buffer device 160x50
Apr 21 10:09:33.378232 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 21 10:09:33.380138 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:09:33.389620 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 21 10:09:33.398825 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:33.402703 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:09:33.405236 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:09:33.407782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:09:33.409785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:09:33.411714 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:09:33.421745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:09:33.429790 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:09:33.430644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:09:33.432704 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:09:33.436744 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:09:33.439447 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:09:33.450532 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:09:33.454396 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 21 10:09:33.456781 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:09:33.466482 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:09:33.467277 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:09:33.468335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:09:33.469534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:09:33.471867 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:09:33.472037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:09:33.472911 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:09:33.477871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:09:33.479322 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:09:33.480636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:09:33.481039 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:09:33.491756 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:09:33.491960 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:09:33.503917 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:09:33.505102 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:09:33.509993 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:09:33.514695 augenrules[1432]: No rules
Apr 21 10:09:33.517257 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:09:33.523689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:09:33.524025 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:09:33.545754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:09:33.546708 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:09:33.547374 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:09:33.552991 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:09:33.563070 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:09:33.570897 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:09:33.612158 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:09:33.621490 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:09:33.623758 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:09:33.659056 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:09:33.660482 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:09:33.670828 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:09:33.680432 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 21 10:09:33.681049 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:09:33.691863 systemd-resolved[1417]: Positive Trust Anchors:
Apr 21 10:09:33.692143 systemd-resolved[1417]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:09:33.692210 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:09:33.693609 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:09:33.699625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:09:33.704867 systemd-networkd[1416]: lo: Link UP
Apr 21 10:09:33.704879 systemd-networkd[1416]: lo: Gained carrier
Apr 21 10:09:33.706443 systemd-resolved[1417]: Using system hostname 'ci-4081-3-7-a-afac96dda8'.
Apr 21 10:09:33.708872 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:09:33.709654 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:09:33.709891 systemd-networkd[1416]: Enumeration completed
Apr 21 10:09:33.710068 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:09:33.710344 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:33.710348 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:09:33.711459 systemd-networkd[1416]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:33.711468 systemd-networkd[1416]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:09:33.711845 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:09:33.712102 systemd-networkd[1416]: eth0: Link UP
Apr 21 10:09:33.712111 systemd-networkd[1416]: eth0: Gained carrier
Apr 21 10:09:33.712126 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:33.714223 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:09:33.714961 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:09:33.716006 systemd-networkd[1416]: eth1: Link UP
Apr 21 10:09:33.716012 systemd-networkd[1416]: eth1: Gained carrier
Apr 21 10:09:33.716027 systemd-networkd[1416]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:09:33.717533 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:09:33.718925 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:09:33.719314 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:09:33.719353 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:09:33.719804 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:09:33.721517 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:09:33.723784 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:09:33.728611 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:09:33.730308 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:09:33.731146 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:09:33.733304 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:09:33.734690 systemd[1]: Reached target network.target - Network.
Apr 21 10:09:33.735149 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:09:33.737859 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:09:33.739225 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:09:33.739275 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:09:33.744650 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:09:33.747066 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:09:33.750683 systemd-networkd[1416]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 21 10:09:33.752554 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection.
Apr 21 10:09:33.756842 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:09:33.760779 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:09:33.763724 systemd-networkd[1416]: eth0: DHCPv4 address 46.62.167.141/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 21 10:09:33.764361 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection.
Apr 21 10:09:33.764554 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:09:33.765918 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:09:33.768730 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:09:33.774015 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:09:33.777715 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 21 10:09:33.787305 coreos-metadata[1466]: Apr 21 10:09:33.787 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 21 10:09:33.788604 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:09:33.798105 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:09:33.800131 dbus-daemon[1467]: [system] SELinux support is enabled
Apr 21 10:09:33.805756 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:09:33.809509 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:09:33.809995 coreos-metadata[1466]: Apr 21 10:09:33.809 INFO Fetch successful
Apr 21 10:09:33.809995 coreos-metadata[1466]: Apr 21 10:09:33.809 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 21 10:09:33.811248 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:09:33.812047 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:09:33.818753 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:09:33.819063 coreos-metadata[1466]: Apr 21 10:09:33.818 INFO Fetch successful
Apr 21 10:09:33.821549 jq[1468]: false
Apr 21 10:09:33.821979 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:09:33.823002 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:09:33.829555 jq[1482]: true
Apr 21 10:09:33.833735 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:09:33.833917 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:09:33.845019 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:09:33.845204 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:09:33.857928 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:09:33.858119 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:09:33.862370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:09:33.862419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:09:33.864413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:09:33.864444 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:09:33.869489 update_engine[1480]: I20260421 10:09:33.869410 1480 main.cc:92] Flatcar Update Engine starting
Apr 21 10:09:33.874811 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:09:33.875055 update_engine[1480]: I20260421 10:09:33.874858 1480 update_check_scheduler.cc:74] Next update check in 7m35s Apr 21 10:09:33.882924 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 10:09:33.906805 jq[1488]: true Apr 21 10:09:33.910739 extend-filesystems[1471]: Found loop4 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found loop5 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found loop6 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found loop7 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda1 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda2 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda3 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found usr Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda4 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda6 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda7 Apr 21 10:09:33.916357 extend-filesystems[1471]: Found sda9 Apr 21 10:09:33.916357 extend-filesystems[1471]: Checking size of /dev/sda9 Apr 21 10:09:33.911971 systemd-logind[1477]: New seat seat0. Apr 21 10:09:33.946747 tar[1486]: linux-amd64/LICENSE Apr 21 10:09:33.946747 tar[1486]: linux-amd64/helm Apr 21 10:09:33.913166 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (Power Button) Apr 21 10:09:33.913194 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 10:09:33.913967 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 21 10:09:33.938957 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 10:09:33.971539 extend-filesystems[1471]: Resized partition /dev/sda9 Apr 21 10:09:33.983586 extend-filesystems[1524]: resize2fs 1.47.1 (20-May-2024) Apr 21 10:09:34.000730 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Apr 21 10:09:33.999088 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 21 10:09:33.999847 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 10:09:34.065596 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1333) Apr 21 10:09:34.089935 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 10:09:34.104697 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:09:34.106284 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 10:09:34.123889 systemd[1]: Starting sshkeys.service... Apr 21 10:09:34.171409 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 21 10:09:34.179908 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 21 10:09:34.195372 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 10:09:34.207288 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 21 10:09:34.219438 containerd[1504]: time="2026-04-21T10:09:34.219350331Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 21 10:09:34.222595 coreos-metadata[1550]: Apr 21 10:09:34.222 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 21 10:09:34.223986 coreos-metadata[1550]: Apr 21 10:09:34.223 INFO Fetch successful Apr 21 10:09:34.234128 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 10:09:34.234340 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 10:09:34.244124 containerd[1504]: time="2026-04-21T10:09:34.243687848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:09:34.245042 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 21 10:09:34.246524 unknown[1550]: wrote ssh authorized keys file for user: core Apr 21 10:09:34.250880 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.250687373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.250718349Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.250735535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.252782607Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.252806723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.252882968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.252894305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.253114025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.253128426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.253139032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:09:34.254813 containerd[1504]: time="2026-04-21T10:09:34.253146473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 21 10:09:34.255053 containerd[1504]: time="2026-04-21T10:09:34.253248386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 21 10:09:34.257371 containerd[1504]: time="2026-04-21T10:09:34.255745043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 21 10:09:34.257371 containerd[1504]: time="2026-04-21T10:09:34.255888899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 21 10:09:34.257371 containerd[1504]: time="2026-04-21T10:09:34.255900667Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 21 10:09:34.257371 containerd[1504]: time="2026-04-21T10:09:34.255981649Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 21 10:09:34.257371 containerd[1504]: time="2026-04-21T10:09:34.256024813Z" level=info msg="metadata content store policy set" policy=shared Apr 21 10:09:34.264194 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:09:34.278135 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 10:09:34.287870 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 10:09:34.288436 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 10:09:34.308250 containerd[1504]: time="2026-04-21T10:09:34.308202573Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 21 10:09:34.308426 containerd[1504]: time="2026-04-21T10:09:34.308414411Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 21 10:09:34.308731 containerd[1504]: time="2026-04-21T10:09:34.308601903Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 21 10:09:34.308731 containerd[1504]: time="2026-04-21T10:09:34.308622263Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Apr 21 10:09:34.308731 containerd[1504]: time="2026-04-21T10:09:34.308644066Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 21 10:09:34.309072 containerd[1504]: time="2026-04-21T10:09:34.309058598Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 21 10:09:34.309457 containerd[1504]: time="2026-04-21T10:09:34.309444978Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 21 10:09:34.314099 containerd[1504]: time="2026-04-21T10:09:34.314084000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 21 10:09:34.314213 containerd[1504]: time="2026-04-21T10:09:34.314161637Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 21 10:09:34.314213 containerd[1504]: time="2026-04-21T10:09:34.314175638Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 21 10:09:34.314602 containerd[1504]: time="2026-04-21T10:09:34.314193885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 21 10:09:34.314602 containerd[1504]: time="2026-04-21T10:09:34.314354046Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 21 10:09:34.314602 containerd[1504]: time="2026-04-21T10:09:34.314372233Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314635398Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314652594Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314670551Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314680356Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314691322Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314715679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314726044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314736380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314747667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314757792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314768207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314776780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314787016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318763 containerd[1504]: time="2026-04-21T10:09:34.314797361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314808498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314817381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314825874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314843080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314854467Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314869930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314878633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314886445Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314926495Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314940786Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314949730Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314958142Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:09:34.318993 containerd[1504]: time="2026-04-21T10:09:34.314964863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:09:34.319206 containerd[1504]: time="2026-04-21T10:09:34.314973155Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:09:34.319206 containerd[1504]: time="2026-04-21T10:09:34.314991773Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:09:34.319206 containerd[1504]: time="2026-04-21T10:09:34.315000075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 21 10:09:34.319259 containerd[1504]: time="2026-04-21T10:09:34.315232564Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:09:34.319259 containerd[1504]: time="2026-04-21T10:09:34.315279535Z" level=info msg="Connect containerd service" Apr 21 10:09:34.319259 containerd[1504]: time="2026-04-21T10:09:34.315317001Z" level=info msg="using legacy CRI server" Apr 21 10:09:34.319259 containerd[1504]: time="2026-04-21T10:09:34.315322910Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:09:34.319259 containerd[1504]: time="2026-04-21T10:09:34.315395118Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:09:34.320224 containerd[1504]: time="2026-04-21T10:09:34.319823334Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:09:34.320224 containerd[1504]: time="2026-04-21T10:09:34.320050355Z" level=info msg="Start subscribing containerd event" Apr 21 10:09:34.320224 containerd[1504]: time="2026-04-21T10:09:34.320142523Z" level=info msg="Start recovering state" Apr 21 10:09:34.320462 containerd[1504]: time="2026-04-21T10:09:34.320348192Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 21 10:09:34.320462 containerd[1504]: time="2026-04-21T10:09:34.320424196Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:09:34.320808 containerd[1504]: time="2026-04-21T10:09:34.320785729Z" level=info msg="Start event monitor" Apr 21 10:09:34.320835 containerd[1504]: time="2026-04-21T10:09:34.320819539Z" level=info msg="Start snapshots syncer" Apr 21 10:09:34.320835 containerd[1504]: time="2026-04-21T10:09:34.320829314Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:09:34.320863 containerd[1504]: time="2026-04-21T10:09:34.320836485Z" level=info msg="Start streaming server" Apr 21 10:09:34.321232 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:09:34.321657 containerd[1504]: time="2026-04-21T10:09:34.321418759Z" level=info msg="containerd successfully booted in 0.103157s" Apr 21 10:09:34.333934 update-ssh-keys[1569]: Updated "/home/core/.ssh/authorized_keys" Apr 21 10:09:34.335431 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 21 10:09:34.340667 systemd[1]: Finished sshkeys.service. Apr 21 10:09:34.369689 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Apr 21 10:09:34.405919 extend-filesystems[1524]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 21 10:09:34.405919 extend-filesystems[1524]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 21 10:09:34.405919 extend-filesystems[1524]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Apr 21 10:09:34.412437 extend-filesystems[1471]: Resized filesystem in /dev/sda9 Apr 21 10:09:34.412437 extend-filesystems[1471]: Found sr0 Apr 21 10:09:34.409732 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 10:09:34.409973 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 21 10:09:34.614647 tar[1486]: linux-amd64/README.md Apr 21 10:09:34.625072 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 10:09:35.178873 systemd-networkd[1416]: eth0: Gained IPv6LL Apr 21 10:09:35.179539 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection. Apr 21 10:09:35.182192 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 10:09:35.185346 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 10:09:35.195907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:09:35.200844 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 10:09:35.236709 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 10:09:35.627994 systemd-networkd[1416]: eth1: Gained IPv6LL Apr 21 10:09:35.630532 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection. Apr 21 10:09:36.032859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:09:36.035957 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 10:09:36.036296 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:09:36.038131 systemd[1]: Startup finished in 1.381s (kernel) + 5.615s (initrd) + 4.749s (userspace) = 11.746s. 
Apr 21 10:09:36.636389 kubelet[1596]: E0421 10:09:36.636328 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:09:36.641824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:09:36.642025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:09:39.654816 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 10:09:39.661778 systemd[1]: Started sshd@0-46.62.167.141:22-50.85.169.122:43536.service - OpenSSH per-connection server daemon (50.85.169.122:43536). Apr 21 10:09:39.889624 sshd[1608]: Accepted publickey for core from 50.85.169.122 port 43536 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:39.893073 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:39.912268 systemd-logind[1477]: New session 1 of user core. Apr 21 10:09:39.915541 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 10:09:39.923043 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 10:09:39.957098 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 10:09:39.967012 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 21 10:09:39.977697 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 10:09:40.106136 systemd[1612]: Queued start job for default target default.target. Apr 21 10:09:40.114650 systemd[1612]: Created slice app.slice - User Application Slice. Apr 21 10:09:40.114670 systemd[1612]: Reached target paths.target - Paths. 
Apr 21 10:09:40.114682 systemd[1612]: Reached target timers.target - Timers. Apr 21 10:09:40.116028 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 10:09:40.131205 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 10:09:40.131400 systemd[1612]: Reached target sockets.target - Sockets. Apr 21 10:09:40.131429 systemd[1612]: Reached target basic.target - Basic System. Apr 21 10:09:40.131489 systemd[1612]: Reached target default.target - Main User Target. Apr 21 10:09:40.131553 systemd[1612]: Startup finished in 141ms. Apr 21 10:09:40.131738 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 10:09:40.140733 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 10:09:40.324247 systemd[1]: Started sshd@1-46.62.167.141:22-50.85.169.122:43550.service - OpenSSH per-connection server daemon (50.85.169.122:43550). Apr 21 10:09:40.533623 sshd[1623]: Accepted publickey for core from 50.85.169.122 port 43550 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:40.536096 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:40.544027 systemd-logind[1477]: New session 2 of user core. Apr 21 10:09:40.552816 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 21 10:09:40.707203 sshd[1623]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:40.710937 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Apr 21 10:09:40.712739 systemd[1]: sshd@1-46.62.167.141:22-50.85.169.122:43550.service: Deactivated successfully. Apr 21 10:09:40.714470 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 10:09:40.717166 systemd-logind[1477]: Removed session 2. Apr 21 10:09:40.745753 systemd[1]: Started sshd@2-46.62.167.141:22-50.85.169.122:43566.service - OpenSSH per-connection server daemon (50.85.169.122:43566). 
Apr 21 10:09:40.956609 sshd[1630]: Accepted publickey for core from 50.85.169.122 port 43566 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:40.958946 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:40.969627 systemd-logind[1477]: New session 3 of user core. Apr 21 10:09:40.975875 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 10:09:41.120488 sshd[1630]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:41.127091 systemd[1]: sshd@2-46.62.167.141:22-50.85.169.122:43566.service: Deactivated successfully. Apr 21 10:09:41.130013 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 10:09:41.130993 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Apr 21 10:09:41.132660 systemd-logind[1477]: Removed session 3. Apr 21 10:09:41.169166 systemd[1]: Started sshd@3-46.62.167.141:22-50.85.169.122:43582.service - OpenSSH per-connection server daemon (50.85.169.122:43582). Apr 21 10:09:41.389801 sshd[1637]: Accepted publickey for core from 50.85.169.122 port 43582 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:41.390980 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:41.396659 systemd-logind[1477]: New session 4 of user core. Apr 21 10:09:41.405833 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 10:09:41.562733 sshd[1637]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:41.569483 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Apr 21 10:09:41.570689 systemd[1]: sshd@3-46.62.167.141:22-50.85.169.122:43582.service: Deactivated successfully. Apr 21 10:09:41.573963 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 10:09:41.575367 systemd-logind[1477]: Removed session 4. 
Apr 21 10:09:41.613375 systemd[1]: Started sshd@4-46.62.167.141:22-50.85.169.122:43594.service - OpenSSH per-connection server daemon (50.85.169.122:43594). Apr 21 10:09:41.824134 sshd[1644]: Accepted publickey for core from 50.85.169.122 port 43594 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:41.825989 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:41.829757 systemd-logind[1477]: New session 5 of user core. Apr 21 10:09:41.836690 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 21 10:09:41.968792 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 10:09:41.969130 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:09:41.984547 sudo[1647]: pam_unix(sudo:session): session closed for user root Apr 21 10:09:42.015748 sshd[1644]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:42.019550 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Apr 21 10:09:42.020200 systemd[1]: sshd@4-46.62.167.141:22-50.85.169.122:43594.service: Deactivated successfully. Apr 21 10:09:42.021923 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 10:09:42.022922 systemd-logind[1477]: Removed session 5. Apr 21 10:09:42.066064 systemd[1]: Started sshd@5-46.62.167.141:22-50.85.169.122:43604.service - OpenSSH per-connection server daemon (50.85.169.122:43604). Apr 21 10:09:42.267622 sshd[1652]: Accepted publickey for core from 50.85.169.122 port 43604 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:09:42.269777 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:09:42.276677 systemd-logind[1477]: New session 6 of user core. Apr 21 10:09:42.287834 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 10:09:42.409935 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:09:42.410657 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:09:42.415926 sudo[1656]: pam_unix(sudo:session): session closed for user root Apr 21 10:09:42.426949 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:09:42.427660 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:09:42.447771 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:09:42.449769 auditctl[1659]: No rules Apr 21 10:09:42.449419 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:09:42.449614 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:09:42.452473 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:09:42.479433 augenrules[1677]: No rules Apr 21 10:09:42.480830 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:09:42.482274 sudo[1655]: pam_unix(sudo:session): session closed for user root Apr 21 10:09:42.513415 sshd[1652]: pam_unix(sshd:session): session closed for user core Apr 21 10:09:42.518422 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:09:42.519778 systemd[1]: sshd@5-46.62.167.141:22-50.85.169.122:43604.service: Deactivated successfully. Apr 21 10:09:42.522093 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:09:42.523386 systemd-logind[1477]: Removed session 6. Apr 21 10:09:42.561998 systemd[1]: Started sshd@6-46.62.167.141:22-50.85.169.122:43612.service - OpenSSH per-connection server daemon (50.85.169.122:43612). 
Apr 21 10:09:42.772697 sshd[1685]: Accepted publickey for core from 50.85.169.122 port 43612 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY
Apr 21 10:09:42.774231 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:09:42.782529 systemd-logind[1477]: New session 7 of user core.
Apr 21 10:09:42.791915 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:09:42.914777 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:09:42.915477 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:09:43.204806 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:09:43.222201 (dockerd)[1705]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:09:43.536364 dockerd[1705]: time="2026-04-21T10:09:43.535951736Z" level=info msg="Starting up"
Apr 21 10:09:43.704525 dockerd[1705]: time="2026-04-21T10:09:43.704260390Z" level=info msg="Loading containers: start."
Apr 21 10:09:43.951763 kernel: Initializing XFRM netlink socket
Apr 21 10:09:43.984677 systemd-timesyncd[1419]: Network configuration changed, trying to establish connection.
Apr 21 10:09:44.061886 systemd-networkd[1416]: docker0: Link UP
Apr 21 10:09:44.079996 dockerd[1705]: time="2026-04-21T10:09:44.079940813Z" level=info msg="Loading containers: done."
Apr 21 10:09:44.121711 dockerd[1705]: time="2026-04-21T10:09:44.121627578Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:09:44.122058 dockerd[1705]: time="2026-04-21T10:09:44.121849481Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:09:44.122186 dockerd[1705]: time="2026-04-21T10:09:44.122132015Z" level=info msg="Daemon has completed initialization"
Apr 21 10:09:45.007217 systemd-timesyncd[1419]: Contacted time server 217.144.138.234:123 (2.flatcar.pool.ntp.org).
Apr 21 10:09:45.007341 systemd-timesyncd[1419]: Initial clock synchronization to Tue 2026-04-21 10:09:45.005833 UTC.
Apr 21 10:09:45.007466 systemd-resolved[1417]: Clock change detected. Flushing caches.
Apr 21 10:09:45.013979 dockerd[1705]: time="2026-04-21T10:09:45.013883782Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:09:45.014508 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:09:46.016619 containerd[1504]: time="2026-04-21T10:09:46.016536797Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 10:09:47.255743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994941377.mount: Deactivated successfully.
Apr 21 10:09:47.717984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:09:47.727924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:09:47.941455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:09:47.946486 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:09:47.992625 kubelet[1876]: E0421 10:09:47.992312 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:09:47.997316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:09:47.997557 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:09:49.236229 containerd[1504]: time="2026-04-21T10:09:49.236105613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:49.356447 containerd[1504]: time="2026-04-21T10:09:49.356105623Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30194089"
Apr 21 10:09:49.512903 containerd[1504]: time="2026-04-21T10:09:49.512703480Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:49.527167 containerd[1504]: time="2026-04-21T10:09:49.527103181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:49.528709 containerd[1504]: time="2026-04-21T10:09:49.528483954Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 3.511894408s"
Apr 21 10:09:49.528709 containerd[1504]: time="2026-04-21T10:09:49.528519447Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 21 10:09:49.529220 containerd[1504]: time="2026-04-21T10:09:49.529198967Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 10:09:51.082478 containerd[1504]: time="2026-04-21T10:09:51.082430568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:51.083892 containerd[1504]: time="2026-04-21T10:09:51.083864340Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171469"
Apr 21 10:09:51.085039 containerd[1504]: time="2026-04-21T10:09:51.085006124Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:51.087798 containerd[1504]: time="2026-04-21T10:09:51.087764874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:51.088709 containerd[1504]: time="2026-04-21T10:09:51.088478596Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.559253029s"
Apr 21 10:09:51.088709 containerd[1504]: time="2026-04-21T10:09:51.088524244Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 21 10:09:51.089693 containerd[1504]: time="2026-04-21T10:09:51.089655362Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 10:09:52.566486 containerd[1504]: time="2026-04-21T10:09:52.566428502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:52.575408 containerd[1504]: time="2026-04-21T10:09:52.575355310Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289778"
Apr 21 10:09:52.578985 containerd[1504]: time="2026-04-21T10:09:52.578950636Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:52.588165 containerd[1504]: time="2026-04-21T10:09:52.587866557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:52.588868 containerd[1504]: time="2026-04-21T10:09:52.588750393Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.49905389s"
Apr 21 10:09:52.588868 containerd[1504]: time="2026-04-21T10:09:52.588773178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 21 10:09:52.589551 containerd[1504]: time="2026-04-21T10:09:52.589527700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 10:09:54.215631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1829263175.mount: Deactivated successfully.
Apr 21 10:09:54.881701 containerd[1504]: time="2026-04-21T10:09:54.881603534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:54.883405 containerd[1504]: time="2026-04-21T10:09:54.883277145Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010739"
Apr 21 10:09:54.884430 containerd[1504]: time="2026-04-21T10:09:54.884285359Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:54.897488 containerd[1504]: time="2026-04-21T10:09:54.897445089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:54.898005 containerd[1504]: time="2026-04-21T10:09:54.897863697Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 2.308306874s"
Apr 21 10:09:54.898005 containerd[1504]: time="2026-04-21T10:09:54.897890378Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 21 10:09:54.898499 containerd[1504]: time="2026-04-21T10:09:54.898471049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 10:09:55.820564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688119670.mount: Deactivated successfully.
Apr 21 10:09:57.936025 containerd[1504]: time="2026-04-21T10:09:57.935943829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:57.940038 containerd[1504]: time="2026-04-21T10:09:57.939532986Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Apr 21 10:09:57.943492 containerd[1504]: time="2026-04-21T10:09:57.942229904Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:57.948619 containerd[1504]: time="2026-04-21T10:09:57.947780053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:57.950528 containerd[1504]: time="2026-04-21T10:09:57.950474017Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.05191825s"
Apr 21 10:09:57.950703 containerd[1504]: time="2026-04-21T10:09:57.950670512Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 21 10:09:57.952186 containerd[1504]: time="2026-04-21T10:09:57.952122220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 10:09:58.248315 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:09:58.255930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:09:58.432673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:09:58.437084 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:09:58.493198 kubelet[1995]: E0421 10:09:58.493100 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:09:58.496529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:09:58.496771 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:09:58.730523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383149266.mount: Deactivated successfully.
Apr 21 10:09:58.820305 containerd[1504]: time="2026-04-21T10:09:58.820207952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:58.824284 containerd[1504]: time="2026-04-21T10:09:58.823901295Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Apr 21 10:09:58.827625 containerd[1504]: time="2026-04-21T10:09:58.825937411Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:58.832714 containerd[1504]: time="2026-04-21T10:09:58.832665789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:09:58.834653 containerd[1504]: time="2026-04-21T10:09:58.834514915Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 882.157812ms"
Apr 21 10:09:58.834653 containerd[1504]: time="2026-04-21T10:09:58.834616888Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 21 10:09:58.835432 containerd[1504]: time="2026-04-21T10:09:58.835357589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 10:09:59.770500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108712736.mount: Deactivated successfully.
Apr 21 10:10:01.718773 containerd[1504]: time="2026-04-21T10:10:01.718688178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:10:01.720865 containerd[1504]: time="2026-04-21T10:10:01.720612175Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719532"
Apr 21 10:10:01.729783 containerd[1504]: time="2026-04-21T10:10:01.729405613Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:10:01.738129 containerd[1504]: time="2026-04-21T10:10:01.738086111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:10:01.739884 containerd[1504]: time="2026-04-21T10:10:01.739836939Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.904428883s"
Apr 21 10:10:01.739884 containerd[1504]: time="2026-04-21T10:10:01.739871951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 21 10:10:04.448976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:10:04.456917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:10:04.485951 systemd[1]: Reloading requested from client PID 2097 ('systemctl') (unit session-7.scope)...
Apr 21 10:10:04.485966 systemd[1]: Reloading...
Apr 21 10:10:04.600612 zram_generator::config[2137]: No configuration found.
Apr 21 10:10:04.679858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:10:04.741865 systemd[1]: Reloading finished in 255 ms.
Apr 21 10:10:04.793148 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:10:04.793224 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:10:04.793425 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:10:04.795147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:10:04.925392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:10:04.932858 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:10:04.957904 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:10:04.957904 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:10:04.957904 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:10:04.958216 kubelet[2192]: I0421 10:10:04.957947 2192 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:10:05.455163 kubelet[2192]: I0421 10:10:05.455077 2192 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 10:10:05.455163 kubelet[2192]: I0421 10:10:05.455102 2192 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:10:05.455387 kubelet[2192]: I0421 10:10:05.455282 2192 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:10:05.479609 kubelet[2192]: E0421 10:10:05.478241 2192 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.167.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.167.141:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:10:05.481523 kubelet[2192]: I0421 10:10:05.481476 2192 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:10:05.491400 kubelet[2192]: E0421 10:10:05.491357 2192 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:10:05.491549 kubelet[2192]: I0421 10:10:05.491530 2192 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:10:05.498773 kubelet[2192]: I0421 10:10:05.498745 2192 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 10:10:05.500178 kubelet[2192]: I0421 10:10:05.500124 2192 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:10:05.500526 kubelet[2192]: I0421 10:10:05.500290 2192 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-a-afac96dda8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:10:05.500754 kubelet[2192]: I0421 10:10:05.500737 2192 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:10:05.500832 kubelet[2192]: I0421 10:10:05.500818 2192 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 10:10:05.501118 kubelet[2192]: I0421 10:10:05.501099 2192 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:10:05.508452 kubelet[2192]: I0421 10:10:05.508415 2192 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 10:10:05.508673 kubelet[2192]: I0421 10:10:05.508649 2192 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:10:05.508835 kubelet[2192]: I0421 10:10:05.508817 2192 kubelet.go:386] "Adding apiserver pod source"
Apr 21 10:10:05.511528 kubelet[2192]: I0421 10:10:05.511499 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:10:05.517728 kubelet[2192]: E0421 10:10:05.517683 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.167.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-a-afac96dda8&limit=500&resourceVersion=0\": dial tcp 46.62.167.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:10:05.520103 kubelet[2192]: E0421 10:10:05.519023 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.167.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.167.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:10:05.520103 kubelet[2192]: I0421 10:10:05.519122 2192 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:10:05.520103 kubelet[2192]: I0421 10:10:05.519516 2192 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:10:05.521539 kubelet[2192]: W0421 10:10:05.520729 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:10:05.527975 kubelet[2192]: I0421 10:10:05.527945 2192 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:10:05.528736 kubelet[2192]: I0421 10:10:05.528184 2192 server.go:1289] "Started kubelet"
Apr 21 10:10:05.530631 kubelet[2192]: I0421 10:10:05.530316 2192 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:10:05.531750 kubelet[2192]: I0421 10:10:05.531266 2192 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:10:05.536406 kubelet[2192]: I0421 10:10:05.535533 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:10:05.537766 kubelet[2192]: E0421 10:10:05.536214 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.167.141:6443/api/v1/namespaces/default/events\": dial tcp 46.62.167.141:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-a-afac96dda8.18a857734169af76 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-a-afac96dda8,UID:ci-4081-3-7-a-afac96dda8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-a-afac96dda8,},FirstTimestamp:2026-04-21 10:10:05.527961462 +0000 UTC m=+0.592049288,LastTimestamp:2026-04-21 10:10:05.527961462 +0000 UTC m=+0.592049288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-a-afac96dda8,}"
Apr 21 10:10:05.540598 kubelet[2192]: I0421 10:10:05.534089 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:10:05.540598 kubelet[2192]: I0421 10:10:05.539404 2192 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:10:05.540598 kubelet[2192]: I0421 10:10:05.539560 2192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:10:05.543688 kubelet[2192]: E0421 10:10:05.543676 2192 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:10:05.543918 kubelet[2192]: E0421 10:10:05.543908 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found"
Apr 21 10:10:05.543979 kubelet[2192]: I0421 10:10:05.543972 2192 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:10:05.544167 kubelet[2192]: I0421 10:10:05.544156 2192 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:10:05.544234 kubelet[2192]: I0421 10:10:05.544228 2192 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:10:05.545709 kubelet[2192]: I0421 10:10:05.545694 2192 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:10:05.545829 kubelet[2192]: I0421 10:10:05.545817 2192 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:10:05.546220 kubelet[2192]: E0421 10:10:05.546200 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.167.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.167.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:10:05.546960 kubelet[2192]: I0421 10:10:05.546948 2192 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:10:05.551900 kubelet[2192]: E0421 10:10:05.551882 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.167.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-afac96dda8?timeout=10s\": dial tcp 46.62.167.141:6443: connect: connection refused" interval="200ms"
Apr 21 10:10:05.558687 kubelet[2192]: I0421 10:10:05.558644 2192 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:10:05.559711 kubelet[2192]: I0421 10:10:05.559685 2192 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:10:05.559711 kubelet[2192]: I0421 10:10:05.559702 2192 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:10:05.559771 kubelet[2192]: I0421 10:10:05.559724 2192 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:10:05.559771 kubelet[2192]: I0421 10:10:05.559731 2192 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:10:05.559771 kubelet[2192]: E0421 10:10:05.559767 2192 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:10:05.568050 kubelet[2192]: E0421 10:10:05.568020 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.167.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.167.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:10:05.571458 kubelet[2192]: I0421 10:10:05.571434 2192 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:10:05.571458 kubelet[2192]: I0421 10:10:05.571451 2192 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:10:05.571526 kubelet[2192]: I0421 10:10:05.571467 2192 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:10:05.575314 kubelet[2192]: I0421 10:10:05.575284 2192 policy_none.go:49] "None policy: Start"
Apr 21 10:10:05.575314 kubelet[2192]: I0421 10:10:05.575302 2192 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:10:05.575371 kubelet[2192]: I0421 10:10:05.575318 2192 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:10:05.581938 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:10:05.592508 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:10:05.595531 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:10:05.606024 kubelet[2192]: E0421 10:10:05.605989 2192 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:10:05.606208 kubelet[2192]: I0421 10:10:05.606183 2192 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:10:05.606232 kubelet[2192]: I0421 10:10:05.606197 2192 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:10:05.606724 kubelet[2192]: I0421 10:10:05.606698 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:10:05.608603 kubelet[2192]: E0421 10:10:05.608451 2192 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:10:05.608603 kubelet[2192]: E0421 10:10:05.608489 2192 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-a-afac96dda8\" not found"
Apr 21 10:10:05.687732 systemd[1]: Created slice kubepods-burstable-pod84d7926e3f6ea01b1c7772dc3f09babd.slice - libcontainer container kubepods-burstable-pod84d7926e3f6ea01b1c7772dc3f09babd.slice.
Apr 21 10:10:05.693809 kubelet[2192]: E0421 10:10:05.693431 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8"
Apr 21 10:10:05.698259 systemd[1]: Created slice kubepods-burstable-pod8bb673ff5e79650010684fc302d818ed.slice - libcontainer container kubepods-burstable-pod8bb673ff5e79650010684fc302d818ed.slice.
Apr 21 10:10:05.699845 kubelet[2192]: E0421 10:10:05.699806 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8"
Apr 21 10:10:05.708481 kubelet[2192]: I0421 10:10:05.708405 2192 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-a-afac96dda8"
Apr 21 10:10:05.708956 kubelet[2192]: E0421 10:10:05.708920 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.167.141:6443/api/v1/nodes\": dial tcp 46.62.167.141:6443: connect: connection refused" node="ci-4081-3-7-a-afac96dda8"
Apr 21 10:10:05.720589 systemd[1]: Created slice kubepods-burstable-pod4a41200866e94006003070a63aa83258.slice - libcontainer container kubepods-burstable-pod4a41200866e94006003070a63aa83258.slice.
Apr 21 10:10:05.722248 kubelet[2192]: E0421 10:10:05.722225 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745511 kubelet[2192]: I0421 10:10:05.745456 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84d7926e3f6ea01b1c7772dc3f09babd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-a-afac96dda8\" (UID: \"84d7926e3f6ea01b1c7772dc3f09babd\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745511 kubelet[2192]: I0421 10:10:05.745522 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745511 kubelet[2192]: I0421 10:10:05.745605 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745811 kubelet[2192]: I0421 10:10:05.745635 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745811 
kubelet[2192]: I0421 10:10:05.745660 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745811 kubelet[2192]: I0421 10:10:05.745690 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745811 kubelet[2192]: I0421 10:10:05.745712 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a41200866e94006003070a63aa83258-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-a-afac96dda8\" (UID: \"4a41200866e94006003070a63aa83258\") " pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745811 kubelet[2192]: I0421 10:10:05.745747 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84d7926e3f6ea01b1c7772dc3f09babd-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-a-afac96dda8\" (UID: \"84d7926e3f6ea01b1c7772dc3f09babd\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.745949 kubelet[2192]: I0421 10:10:05.745774 2192 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84d7926e3f6ea01b1c7772dc3f09babd-k8s-certs\") pod 
\"kube-apiserver-ci-4081-3-7-a-afac96dda8\" (UID: \"84d7926e3f6ea01b1c7772dc3f09babd\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.753115 kubelet[2192]: E0421 10:10:05.753057 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.167.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-afac96dda8?timeout=10s\": dial tcp 46.62.167.141:6443: connect: connection refused" interval="400ms" Apr 21 10:10:05.913012 kubelet[2192]: I0421 10:10:05.912620 2192 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.913012 kubelet[2192]: E0421 10:10:05.912981 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.167.141:6443/api/v1/nodes\": dial tcp 46.62.167.141:6443: connect: connection refused" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:05.995406 containerd[1504]: time="2026-04-21T10:10:05.995224075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-a-afac96dda8,Uid:84d7926e3f6ea01b1c7772dc3f09babd,Namespace:kube-system,Attempt:0,}" Apr 21 10:10:06.001556 containerd[1504]: time="2026-04-21T10:10:06.001478632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-a-afac96dda8,Uid:8bb673ff5e79650010684fc302d818ed,Namespace:kube-system,Attempt:0,}" Apr 21 10:10:06.023748 containerd[1504]: time="2026-04-21T10:10:06.023527444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-a-afac96dda8,Uid:4a41200866e94006003070a63aa83258,Namespace:kube-system,Attempt:0,}" Apr 21 10:10:06.153801 kubelet[2192]: E0421 10:10:06.153735 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.167.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-afac96dda8?timeout=10s\": dial tcp 46.62.167.141:6443: connect: 
connection refused" interval="800ms" Apr 21 10:10:06.316607 kubelet[2192]: I0421 10:10:06.316437 2192 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:06.317103 kubelet[2192]: E0421 10:10:06.316941 2192 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.167.141:6443/api/v1/nodes\": dial tcp 46.62.167.141:6443: connect: connection refused" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:06.481405 kubelet[2192]: E0421 10:10:06.481351 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.167.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-a-afac96dda8&limit=500&resourceVersion=0\": dial tcp 46.62.167.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:10:06.506093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221249027.mount: Deactivated successfully. 
Apr 21 10:10:06.519523 containerd[1504]: time="2026-04-21T10:10:06.519443411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:10:06.520880 containerd[1504]: time="2026-04-21T10:10:06.520808870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:10:06.521381 containerd[1504]: time="2026-04-21T10:10:06.521342051Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:10:06.522732 containerd[1504]: time="2026-04-21T10:10:06.522693289Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:10:06.524598 containerd[1504]: time="2026-04-21T10:10:06.523585237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Apr 21 10:10:06.526010 containerd[1504]: time="2026-04-21T10:10:06.525866661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:10:06.526010 containerd[1504]: time="2026-04-21T10:10:06.525918159Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:10:06.529204 containerd[1504]: time="2026-04-21T10:10:06.529164351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:10:06.531115 
containerd[1504]: time="2026-04-21T10:10:06.530513315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.973802ms" Apr 21 10:10:06.534213 containerd[1504]: time="2026-04-21T10:10:06.534165657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 510.534757ms" Apr 21 10:10:06.535122 containerd[1504]: time="2026-04-21T10:10:06.535081982Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 539.776064ms" Apr 21 10:10:06.634949 containerd[1504]: time="2026-04-21T10:10:06.634689957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:06.635048 containerd[1504]: time="2026-04-21T10:10:06.634775505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:06.635048 containerd[1504]: time="2026-04-21T10:10:06.634784819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:06.636343 containerd[1504]: time="2026-04-21T10:10:06.634984579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:06.638730 containerd[1504]: time="2026-04-21T10:10:06.638229990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:06.638730 containerd[1504]: time="2026-04-21T10:10:06.638588057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:06.638730 containerd[1504]: time="2026-04-21T10:10:06.638596680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:06.638730 containerd[1504]: time="2026-04-21T10:10:06.638674988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:06.642589 containerd[1504]: time="2026-04-21T10:10:06.641482992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:06.642589 containerd[1504]: time="2026-04-21T10:10:06.641516092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:06.642589 containerd[1504]: time="2026-04-21T10:10:06.641526137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:06.642589 containerd[1504]: time="2026-04-21T10:10:06.641594239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:06.664750 systemd[1]: Started cri-containerd-577f5dfbe6bd1e06c2d2ed47c0e1d15c8bf5e238e94e79f917f2cc048daa375b.scope - libcontainer container 577f5dfbe6bd1e06c2d2ed47c0e1d15c8bf5e238e94e79f917f2cc048daa375b. 
Apr 21 10:10:06.668858 systemd[1]: Started cri-containerd-a0ef7518e8abb9639552d56d413a1ab0c02f05e3fc85169a22c997915f8ed162.scope - libcontainer container a0ef7518e8abb9639552d56d413a1ab0c02f05e3fc85169a22c997915f8ed162. Apr 21 10:10:06.680717 systemd[1]: Started cri-containerd-f226a4361b1e9f35af9b20d7ac4efa219a6b2eea5af7f181f28ef05154df30d5.scope - libcontainer container f226a4361b1e9f35af9b20d7ac4efa219a6b2eea5af7f181f28ef05154df30d5. Apr 21 10:10:06.729536 containerd[1504]: time="2026-04-21T10:10:06.727894722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-a-afac96dda8,Uid:8bb673ff5e79650010684fc302d818ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0ef7518e8abb9639552d56d413a1ab0c02f05e3fc85169a22c997915f8ed162\"" Apr 21 10:10:06.737119 containerd[1504]: time="2026-04-21T10:10:06.737078134Z" level=info msg="CreateContainer within sandbox \"a0ef7518e8abb9639552d56d413a1ab0c02f05e3fc85169a22c997915f8ed162\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:10:06.748890 containerd[1504]: time="2026-04-21T10:10:06.748830803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-a-afac96dda8,Uid:84d7926e3f6ea01b1c7772dc3f09babd,Namespace:kube-system,Attempt:0,} returns sandbox id \"577f5dfbe6bd1e06c2d2ed47c0e1d15c8bf5e238e94e79f917f2cc048daa375b\"" Apr 21 10:10:06.750679 containerd[1504]: time="2026-04-21T10:10:06.750655862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-a-afac96dda8,Uid:4a41200866e94006003070a63aa83258,Namespace:kube-system,Attempt:0,} returns sandbox id \"f226a4361b1e9f35af9b20d7ac4efa219a6b2eea5af7f181f28ef05154df30d5\"" Apr 21 10:10:06.754349 containerd[1504]: time="2026-04-21T10:10:06.754323106Z" level=info msg="CreateContainer within sandbox \"577f5dfbe6bd1e06c2d2ed47c0e1d15c8bf5e238e94e79f917f2cc048daa375b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 
21 10:10:06.756703 containerd[1504]: time="2026-04-21T10:10:06.756625332Z" level=info msg="CreateContainer within sandbox \"f226a4361b1e9f35af9b20d7ac4efa219a6b2eea5af7f181f28ef05154df30d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:10:06.770224 containerd[1504]: time="2026-04-21T10:10:06.769683750Z" level=info msg="CreateContainer within sandbox \"a0ef7518e8abb9639552d56d413a1ab0c02f05e3fc85169a22c997915f8ed162\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370\"" Apr 21 10:10:06.770361 containerd[1504]: time="2026-04-21T10:10:06.770336860Z" level=info msg="StartContainer for \"02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370\"" Apr 21 10:10:06.775592 containerd[1504]: time="2026-04-21T10:10:06.774468051Z" level=info msg="CreateContainer within sandbox \"577f5dfbe6bd1e06c2d2ed47c0e1d15c8bf5e238e94e79f917f2cc048daa375b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1571c00493b4194f802fca64b3bf138cf90b57c8f36fefde6252ca28827f1710\"" Apr 21 10:10:06.775592 containerd[1504]: time="2026-04-21T10:10:06.774964686Z" level=info msg="StartContainer for \"1571c00493b4194f802fca64b3bf138cf90b57c8f36fefde6252ca28827f1710\"" Apr 21 10:10:06.780966 containerd[1504]: time="2026-04-21T10:10:06.780937971Z" level=info msg="CreateContainer within sandbox \"f226a4361b1e9f35af9b20d7ac4efa219a6b2eea5af7f181f28ef05154df30d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca\"" Apr 21 10:10:06.781485 containerd[1504]: time="2026-04-21T10:10:06.781464010Z" level=info msg="StartContainer for \"4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca\"" Apr 21 10:10:06.801674 systemd[1]: Started cri-containerd-02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370.scope - libcontainer container 
02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370. Apr 21 10:10:06.810025 systemd[1]: Started cri-containerd-4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca.scope - libcontainer container 4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca. Apr 21 10:10:06.823708 systemd[1]: Started cri-containerd-1571c00493b4194f802fca64b3bf138cf90b57c8f36fefde6252ca28827f1710.scope - libcontainer container 1571c00493b4194f802fca64b3bf138cf90b57c8f36fefde6252ca28827f1710. Apr 21 10:10:06.824663 kubelet[2192]: E0421 10:10:06.824638 2192 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.167.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.167.141:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:10:06.876397 containerd[1504]: time="2026-04-21T10:10:06.876359713Z" level=info msg="StartContainer for \"02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370\" returns successfully" Apr 21 10:10:06.882459 containerd[1504]: time="2026-04-21T10:10:06.882431416Z" level=info msg="StartContainer for \"1571c00493b4194f802fca64b3bf138cf90b57c8f36fefde6252ca28827f1710\" returns successfully" Apr 21 10:10:06.887651 containerd[1504]: time="2026-04-21T10:10:06.886195475Z" level=info msg="StartContainer for \"4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca\" returns successfully" Apr 21 10:10:07.122155 kubelet[2192]: I0421 10:10:07.122094 2192 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:07.583357 kubelet[2192]: E0421 10:10:07.583324 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:07.585310 kubelet[2192]: E0421 10:10:07.585290 2192 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:07.586629 kubelet[2192]: E0421 10:10:07.586614 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:07.772805 kubelet[2192]: E0421 10:10:07.772762 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:07.839862 kubelet[2192]: I0421 10:10:07.839188 2192 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:07.839862 kubelet[2192]: E0421 10:10:07.839220 2192 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-a-afac96dda8\": node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:07.845461 kubelet[2192]: E0421 10:10:07.845385 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:07.946348 kubelet[2192]: E0421 10:10:07.946263 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:08.047481 kubelet[2192]: E0421 10:10:08.047416 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:08.148463 kubelet[2192]: E0421 10:10:08.148303 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:08.249463 kubelet[2192]: E0421 10:10:08.249399 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 
10:10:08.350301 kubelet[2192]: E0421 10:10:08.350244 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:08.451205 kubelet[2192]: E0421 10:10:08.451059 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:08.551886 kubelet[2192]: E0421 10:10:08.551841 2192 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-a-afac96dda8\" not found" Apr 21 10:10:08.588060 kubelet[2192]: E0421 10:10:08.588020 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:08.588450 kubelet[2192]: E0421 10:10:08.588306 2192 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-a-afac96dda8\" not found" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:08.645660 kubelet[2192]: I0421 10:10:08.645548 2192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:08.660683 kubelet[2192]: E0421 10:10:08.660634 2192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:08.660683 kubelet[2192]: I0421 10:10:08.660676 2192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:08.663711 kubelet[2192]: E0421 10:10:08.663658 2192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-a-afac96dda8\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:08.663711 kubelet[2192]: I0421 10:10:08.663677 2192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:08.665814 kubelet[2192]: E0421 10:10:08.665784 2192 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-a-afac96dda8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:09.294451 kubelet[2192]: I0421 10:10:09.294306 2192 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:09.523170 kubelet[2192]: I0421 10:10:09.522488 2192 apiserver.go:52] "Watching apiserver" Apr 21 10:10:09.545021 kubelet[2192]: I0421 10:10:09.544870 2192 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:10:09.892522 systemd[1]: Reloading requested from client PID 2476 ('systemctl') (unit session-7.scope)... Apr 21 10:10:09.892545 systemd[1]: Reloading... Apr 21 10:10:09.985600 zram_generator::config[2516]: No configuration found. Apr 21 10:10:10.116904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:10:10.189056 systemd[1]: Reloading finished in 295 ms. Apr 21 10:10:10.229864 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:10:10.250646 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:10:10.250862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:10:10.257831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:10:10.376761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:10:10.382498 (kubelet)[2567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:10:10.423602 kubelet[2567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:10:10.423602 kubelet[2567]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:10:10.423602 kubelet[2567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:10:10.423602 kubelet[2567]: I0421 10:10:10.423051 2567 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:10:10.428415 kubelet[2567]: I0421 10:10:10.428395 2567 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:10:10.428527 kubelet[2567]: I0421 10:10:10.428519 2567 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:10:10.428754 kubelet[2567]: I0421 10:10:10.428724 2567 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:10:10.429792 kubelet[2567]: I0421 10:10:10.429780 2567 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:10:10.431557 kubelet[2567]: I0421 10:10:10.431539 2567 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:10:10.435726 kubelet[2567]: E0421 10:10:10.435696 2567 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:10:10.435726 kubelet[2567]: I0421 10:10:10.435721 2567 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:10:10.438622 kubelet[2567]: I0421 10:10:10.438609 2567 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 21 10:10:10.438832 kubelet[2567]: I0421 10:10:10.438809 2567 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:10:10.438930 kubelet[2567]: I0421 10:10:10.438826 2567 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-a-afac96dda8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"
CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:10:10.438930 kubelet[2567]: I0421 10:10:10.438929 2567 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 10:10:10.439012 kubelet[2567]: I0421 10:10:10.438936 2567 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:10:10.439012 kubelet[2567]: I0421 10:10:10.438974 2567 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:10:10.439137 kubelet[2567]: I0421 10:10:10.439112 2567 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:10:10.439450 kubelet[2567]: I0421 10:10:10.439139 2567 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:10:10.439450 kubelet[2567]: I0421 10:10:10.439158 2567 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:10:10.439450 kubelet[2567]: I0421 10:10:10.439170 2567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:10:10.440634 kubelet[2567]: I0421 10:10:10.440563 2567 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:10:10.440956 kubelet[2567]: I0421 10:10:10.440943 2567 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:10:10.444414 kubelet[2567]: I0421 10:10:10.444308 2567 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:10:10.444584 kubelet[2567]: I0421 10:10:10.444494 2567 server.go:1289] "Started kubelet" Apr 21 10:10:10.447192 kubelet[2567]: I0421 10:10:10.447173 2567 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Apr 21 10:10:10.457481 kubelet[2567]: I0421 10:10:10.457101 2567 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:10:10.458300 kubelet[2567]: I0421 10:10:10.458289 2567 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:10:10.461227 kubelet[2567]: I0421 10:10:10.461190 2567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:10:10.461442 kubelet[2567]: I0421 10:10:10.461432 2567 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:10:10.462317 kubelet[2567]: I0421 10:10:10.462304 2567 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:10:10.464075 kubelet[2567]: I0421 10:10:10.464066 2567 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:10:10.465019 kubelet[2567]: I0421 10:10:10.464618 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:10:10.465324 kubelet[2567]: I0421 10:10:10.465303 2567 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:10:10.466232 kubelet[2567]: I0421 10:10:10.466223 2567 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:10:10.466918 kubelet[2567]: I0421 10:10:10.466908 2567 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 21 10:10:10.467137 kubelet[2567]: I0421 10:10:10.466976 2567 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:10:10.467137 kubelet[2567]: I0421 10:10:10.466992 2567 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 10:10:10.467137 kubelet[2567]: I0421 10:10:10.466997 2567 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:10:10.467137 kubelet[2567]: E0421 10:10:10.467033 2567 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:10:10.474839 kubelet[2567]: I0421 10:10:10.474815 2567 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:10:10.475194 kubelet[2567]: I0421 10:10:10.475179 2567 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:10:10.476276 kubelet[2567]: E0421 10:10:10.475865 2567 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:10:10.477200 kubelet[2567]: I0421 10:10:10.477189 2567 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:10:10.506917 kubelet[2567]: I0421 10:10:10.506885 2567 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:10:10.506917 kubelet[2567]: I0421 10:10:10.506904 2567 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:10:10.506917 kubelet[2567]: I0421 10:10:10.506921 2567 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:10:10.507073 kubelet[2567]: I0421 10:10:10.507041 2567 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:10:10.507073 kubelet[2567]: I0421 10:10:10.507054 2567 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:10:10.507073 kubelet[2567]: I0421 10:10:10.507068 2567 policy_none.go:49] "None policy: Start" Apr 21 10:10:10.507135 kubelet[2567]: I0421 10:10:10.507076 2567 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:10:10.507135 kubelet[2567]: I0421 10:10:10.507085 
2567 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:10:10.507184 kubelet[2567]: I0421 10:10:10.507174 2567 state_mem.go:75] "Updated machine memory state" Apr 21 10:10:10.511741 kubelet[2567]: E0421 10:10:10.510765 2567 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:10:10.511741 kubelet[2567]: I0421 10:10:10.510897 2567 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:10:10.511741 kubelet[2567]: I0421 10:10:10.510905 2567 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:10:10.511741 kubelet[2567]: I0421 10:10:10.511271 2567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:10:10.513273 kubelet[2567]: E0421 10:10:10.513240 2567 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:10:10.568388 kubelet[2567]: I0421 10:10:10.568344 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.568602 kubelet[2567]: I0421 10:10:10.568344 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.568678 kubelet[2567]: I0421 10:10:10.568453 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.576866 kubelet[2567]: E0421 10:10:10.576835 2567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.618688 kubelet[2567]: I0421 10:10:10.618643 2567 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.626513 
kubelet[2567]: I0421 10:10:10.626470 2567 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.626660 kubelet[2567]: I0421 10:10:10.626553 2567 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.667743 kubelet[2567]: I0421 10:10:10.667708 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84d7926e3f6ea01b1c7772dc3f09babd-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-a-afac96dda8\" (UID: \"84d7926e3f6ea01b1c7772dc3f09babd\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.667743 kubelet[2567]: I0421 10:10:10.667742 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84d7926e3f6ea01b1c7772dc3f09babd-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-a-afac96dda8\" (UID: \"84d7926e3f6ea01b1c7772dc3f09babd\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.667951 kubelet[2567]: I0421 10:10:10.667761 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84d7926e3f6ea01b1c7772dc3f09babd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-a-afac96dda8\" (UID: \"84d7926e3f6ea01b1c7772dc3f09babd\") " pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.667951 kubelet[2567]: I0421 10:10:10.667780 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 
10:10:10.667951 kubelet[2567]: I0421 10:10:10.667797 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.667951 kubelet[2567]: I0421 10:10:10.667814 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.667951 kubelet[2567]: I0421 10:10:10.667833 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.668164 kubelet[2567]: I0421 10:10:10.667853 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8bb673ff5e79650010684fc302d818ed-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" (UID: \"8bb673ff5e79650010684fc302d818ed\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:10.668164 kubelet[2567]: I0421 10:10:10.667869 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4a41200866e94006003070a63aa83258-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-a-afac96dda8\" (UID: \"4a41200866e94006003070a63aa83258\") " pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:11.440970 kubelet[2567]: I0421 10:10:11.440763 2567 apiserver.go:52] "Watching apiserver" Apr 21 10:10:11.465717 kubelet[2567]: I0421 10:10:11.465653 2567 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:10:11.490407 kubelet[2567]: I0421 10:10:11.490186 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:11.492209 kubelet[2567]: I0421 10:10:11.492071 2567 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:11.498678 kubelet[2567]: E0421 10:10:11.498647 2567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-a-afac96dda8\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:11.500124 kubelet[2567]: E0421 10:10:11.500093 2567 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-a-afac96dda8\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" Apr 21 10:10:11.509374 kubelet[2567]: I0421 10:10:11.509327 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-a-afac96dda8" podStartSLOduration=1.50929732 podStartE2EDuration="1.50929732s" podCreationTimestamp="2026-04-21 10:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:11.508844821 +0000 UTC m=+1.118354895" watchObservedRunningTime="2026-04-21 10:10:11.50929732 +0000 UTC m=+1.118807384" Apr 21 10:10:11.525331 kubelet[2567]: I0421 10:10:11.525285 2567 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" podStartSLOduration=1.5252595869999999 podStartE2EDuration="1.525259587s" podCreationTimestamp="2026-04-21 10:10:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:11.524201398 +0000 UTC m=+1.133711472" watchObservedRunningTime="2026-04-21 10:10:11.525259587 +0000 UTC m=+1.134769651" Apr 21 10:10:11.525471 kubelet[2567]: I0421 10:10:11.525342 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-a-afac96dda8" podStartSLOduration=2.5253399869999997 podStartE2EDuration="2.525339987s" podCreationTimestamp="2026-04-21 10:10:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:11.517038688 +0000 UTC m=+1.126548762" watchObservedRunningTime="2026-04-21 10:10:11.525339987 +0000 UTC m=+1.134850061" Apr 21 10:10:12.261071 systemd[1]: Started sshd@7-46.62.167.141:22-78.128.112.74:50880.service - OpenSSH per-connection server daemon (78.128.112.74:50880). Apr 21 10:10:12.587626 sshd[2615]: Invalid user user from 78.128.112.74 port 50880 Apr 21 10:10:12.657616 sshd[2615]: Connection closed by invalid user user 78.128.112.74 port 50880 [preauth] Apr 21 10:10:12.659175 systemd[1]: sshd@7-46.62.167.141:22-78.128.112.74:50880.service: Deactivated successfully. Apr 21 10:10:15.001751 kubelet[2567]: I0421 10:10:15.001632 2567 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:10:15.002402 containerd[1504]: time="2026-04-21T10:10:15.002339354Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 21 10:10:15.004958 kubelet[2567]: I0421 10:10:15.002797 2567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:10:15.734505 systemd[1]: Created slice kubepods-besteffort-pod04a41868_6e2d_438e_bdc9_778072ecb99e.slice - libcontainer container kubepods-besteffort-pod04a41868_6e2d_438e_bdc9_778072ecb99e.slice. Apr 21 10:10:15.801287 kubelet[2567]: I0421 10:10:15.801041 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c75hs\" (UniqueName: \"kubernetes.io/projected/04a41868-6e2d-438e-bdc9-778072ecb99e-kube-api-access-c75hs\") pod \"kube-proxy-5tf6b\" (UID: \"04a41868-6e2d-438e-bdc9-778072ecb99e\") " pod="kube-system/kube-proxy-5tf6b" Apr 21 10:10:15.801287 kubelet[2567]: I0421 10:10:15.801102 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/04a41868-6e2d-438e-bdc9-778072ecb99e-lib-modules\") pod \"kube-proxy-5tf6b\" (UID: \"04a41868-6e2d-438e-bdc9-778072ecb99e\") " pod="kube-system/kube-proxy-5tf6b" Apr 21 10:10:15.801287 kubelet[2567]: I0421 10:10:15.801150 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/04a41868-6e2d-438e-bdc9-778072ecb99e-kube-proxy\") pod \"kube-proxy-5tf6b\" (UID: \"04a41868-6e2d-438e-bdc9-778072ecb99e\") " pod="kube-system/kube-proxy-5tf6b" Apr 21 10:10:15.801287 kubelet[2567]: I0421 10:10:15.801172 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/04a41868-6e2d-438e-bdc9-778072ecb99e-xtables-lock\") pod \"kube-proxy-5tf6b\" (UID: \"04a41868-6e2d-438e-bdc9-778072ecb99e\") " pod="kube-system/kube-proxy-5tf6b" Apr 21 10:10:15.993073 systemd[1]: Created slice 
kubepods-besteffort-pod1477713e_818f_47b5_beaa_604d0169758e.slice - libcontainer container kubepods-besteffort-pod1477713e_818f_47b5_beaa_604d0169758e.slice. Apr 21 10:10:16.001539 kubelet[2567]: I0421 10:10:16.001509 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg27q\" (UniqueName: \"kubernetes.io/projected/1477713e-818f-47b5-beaa-604d0169758e-kube-api-access-vg27q\") pod \"tigera-operator-6bf85f8dd-7l65b\" (UID: \"1477713e-818f-47b5-beaa-604d0169758e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-7l65b" Apr 21 10:10:16.001539 kubelet[2567]: I0421 10:10:16.001538 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1477713e-818f-47b5-beaa-604d0169758e-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-7l65b\" (UID: \"1477713e-818f-47b5-beaa-604d0169758e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-7l65b" Apr 21 10:10:16.046367 containerd[1504]: time="2026-04-21T10:10:16.046322498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5tf6b,Uid:04a41868-6e2d-438e-bdc9-778072ecb99e,Namespace:kube-system,Attempt:0,}" Apr 21 10:10:16.079613 containerd[1504]: time="2026-04-21T10:10:16.079257308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:16.079613 containerd[1504]: time="2026-04-21T10:10:16.079326692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:16.079613 containerd[1504]: time="2026-04-21T10:10:16.079440413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:16.079613 containerd[1504]: time="2026-04-21T10:10:16.079517128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:16.097701 systemd[1]: Started cri-containerd-5e2484f6cdb8a6aa5753b1746aeba48359c4f611f7774bfec628dccba97514eb.scope - libcontainer container 5e2484f6cdb8a6aa5753b1746aeba48359c4f611f7774bfec628dccba97514eb. Apr 21 10:10:16.123687 containerd[1504]: time="2026-04-21T10:10:16.123634871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5tf6b,Uid:04a41868-6e2d-438e-bdc9-778072ecb99e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e2484f6cdb8a6aa5753b1746aeba48359c4f611f7774bfec628dccba97514eb\"" Apr 21 10:10:16.130643 containerd[1504]: time="2026-04-21T10:10:16.130616770Z" level=info msg="CreateContainer within sandbox \"5e2484f6cdb8a6aa5753b1746aeba48359c4f611f7774bfec628dccba97514eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:10:16.151392 containerd[1504]: time="2026-04-21T10:10:16.151362105Z" level=info msg="CreateContainer within sandbox \"5e2484f6cdb8a6aa5753b1746aeba48359c4f611f7774bfec628dccba97514eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d847c79c10a96779f9af1704f79a012a0059ac0ff1fce72ad12e53f316c932d\"" Apr 21 10:10:16.152445 containerd[1504]: time="2026-04-21T10:10:16.152381886Z" level=info msg="StartContainer for \"2d847c79c10a96779f9af1704f79a012a0059ac0ff1fce72ad12e53f316c932d\"" Apr 21 10:10:16.177685 systemd[1]: Started cri-containerd-2d847c79c10a96779f9af1704f79a012a0059ac0ff1fce72ad12e53f316c932d.scope - libcontainer container 2d847c79c10a96779f9af1704f79a012a0059ac0ff1fce72ad12e53f316c932d. 
Apr 21 10:10:16.202415 containerd[1504]: time="2026-04-21T10:10:16.202326895Z" level=info msg="StartContainer for \"2d847c79c10a96779f9af1704f79a012a0059ac0ff1fce72ad12e53f316c932d\" returns successfully" Apr 21 10:10:16.296244 containerd[1504]: time="2026-04-21T10:10:16.296097739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-7l65b,Uid:1477713e-818f-47b5-beaa-604d0169758e,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:10:16.331873 containerd[1504]: time="2026-04-21T10:10:16.331693743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:16.331873 containerd[1504]: time="2026-04-21T10:10:16.331742436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:16.331873 containerd[1504]: time="2026-04-21T10:10:16.331752601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:16.331873 containerd[1504]: time="2026-04-21T10:10:16.331834694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:16.354675 systemd[1]: Started cri-containerd-5a28f7d39b0e84c89d3b677f48460c40acd4be354fb162aceb669bd76ded6179.scope - libcontainer container 5a28f7d39b0e84c89d3b677f48460c40acd4be354fb162aceb669bd76ded6179. 
Apr 21 10:10:16.385774 containerd[1504]: time="2026-04-21T10:10:16.385723913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-7l65b,Uid:1477713e-818f-47b5-beaa-604d0169758e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5a28f7d39b0e84c89d3b677f48460c40acd4be354fb162aceb669bd76ded6179\"" Apr 21 10:10:16.388763 containerd[1504]: time="2026-04-21T10:10:16.388067420Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:10:16.511687 kubelet[2567]: I0421 10:10:16.511386 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5tf6b" podStartSLOduration=1.5113711589999999 podStartE2EDuration="1.511371159s" podCreationTimestamp="2026-04-21 10:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:16.510689856 +0000 UTC m=+6.120199920" watchObservedRunningTime="2026-04-21 10:10:16.511371159 +0000 UTC m=+6.120881233" Apr 21 10:10:18.054947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682456156.mount: Deactivated successfully. 
Apr 21 10:10:18.732054 containerd[1504]: time="2026-04-21T10:10:18.731489646Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:18.732054 containerd[1504]: time="2026-04-21T10:10:18.732025141Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:10:18.732888 containerd[1504]: time="2026-04-21T10:10:18.732847462Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:18.734361 containerd[1504]: time="2026-04-21T10:10:18.734345185Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:18.735191 containerd[1504]: time="2026-04-21T10:10:18.734815633Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.346344306s" Apr 21 10:10:18.735191 containerd[1504]: time="2026-04-21T10:10:18.734839609Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:10:18.738063 containerd[1504]: time="2026-04-21T10:10:18.738047351Z" level=info msg="CreateContainer within sandbox \"5a28f7d39b0e84c89d3b677f48460c40acd4be354fb162aceb669bd76ded6179\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:10:18.760408 containerd[1504]: time="2026-04-21T10:10:18.760373962Z" level=info msg="CreateContainer within sandbox 
\"5a28f7d39b0e84c89d3b677f48460c40acd4be354fb162aceb669bd76ded6179\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef\"" Apr 21 10:10:18.760856 containerd[1504]: time="2026-04-21T10:10:18.760837249Z" level=info msg="StartContainer for \"f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef\"" Apr 21 10:10:18.786682 systemd[1]: Started cri-containerd-f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef.scope - libcontainer container f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef. Apr 21 10:10:18.810409 containerd[1504]: time="2026-04-21T10:10:18.810371246Z" level=info msg="StartContainer for \"f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef\" returns successfully" Apr 21 10:10:20.118680 update_engine[1480]: I20260421 10:10:20.118608 1480 update_attempter.cc:509] Updating boot flags... Apr 21 10:10:20.179644 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2920) Apr 21 10:10:20.278111 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2922) Apr 21 10:10:20.381596 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2922) Apr 21 10:10:20.416941 kubelet[2567]: I0421 10:10:20.411923 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-7l65b" podStartSLOduration=3.063069448 podStartE2EDuration="5.411560377s" podCreationTimestamp="2026-04-21 10:10:15 +0000 UTC" firstStartedPulling="2026-04-21 10:10:16.386970754 +0000 UTC m=+5.996480818" lastFinishedPulling="2026-04-21 10:10:18.735461683 +0000 UTC m=+8.344971747" observedRunningTime="2026-04-21 10:10:19.521318985 +0000 UTC m=+9.130829089" watchObservedRunningTime="2026-04-21 10:10:20.411560377 +0000 UTC m=+10.021070451" Apr 21 10:10:24.174015 sudo[1688]: 
pam_unix(sudo:session): session closed for user root Apr 21 10:10:24.205483 sshd[1685]: pam_unix(sshd:session): session closed for user core Apr 21 10:10:24.211974 systemd[1]: sshd@6-46.62.167.141:22-50.85.169.122:43612.service: Deactivated successfully. Apr 21 10:10:24.215098 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:10:24.215597 systemd[1]: session-7.scope: Consumed 4.782s CPU time, 158.2M memory peak, 0B memory swap peak. Apr 21 10:10:24.218908 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Apr 21 10:10:24.220768 systemd-logind[1477]: Removed session 7. Apr 21 10:10:24.868813 systemd[1]: Created slice kubepods-besteffort-pode9b18869_89b0_47e7_8416_655f904b410f.slice - libcontainer container kubepods-besteffort-pode9b18869_89b0_47e7_8416_655f904b410f.slice. Apr 21 10:10:24.959793 kubelet[2567]: I0421 10:10:24.959041 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e9b18869-89b0-47e7-8416-655f904b410f-typha-certs\") pod \"calico-typha-69d5d9884c-wnrdw\" (UID: \"e9b18869-89b0-47e7-8416-655f904b410f\") " pod="calico-system/calico-typha-69d5d9884c-wnrdw" Apr 21 10:10:24.959793 kubelet[2567]: I0421 10:10:24.959080 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxgw5\" (UniqueName: \"kubernetes.io/projected/e9b18869-89b0-47e7-8416-655f904b410f-kube-api-access-rxgw5\") pod \"calico-typha-69d5d9884c-wnrdw\" (UID: \"e9b18869-89b0-47e7-8416-655f904b410f\") " pod="calico-system/calico-typha-69d5d9884c-wnrdw" Apr 21 10:10:24.959793 kubelet[2567]: I0421 10:10:24.959095 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9b18869-89b0-47e7-8416-655f904b410f-tigera-ca-bundle\") pod \"calico-typha-69d5d9884c-wnrdw\" (UID: 
\"e9b18869-89b0-47e7-8416-655f904b410f\") " pod="calico-system/calico-typha-69d5d9884c-wnrdw" Apr 21 10:10:24.970931 systemd[1]: Created slice kubepods-besteffort-pod43eeb58b_dda0_4ec9_a57d_e18e3b900300.slice - libcontainer container kubepods-besteffort-pod43eeb58b_dda0_4ec9_a57d_e18e3b900300.slice. Apr 21 10:10:25.059458 kubelet[2567]: I0421 10:10:25.059402 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-var-lib-calico\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059458 kubelet[2567]: I0421 10:10:25.059438 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-cni-bin-dir\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059458 kubelet[2567]: I0421 10:10:25.059452 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-flexvol-driver-host\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059458 kubelet[2567]: I0421 10:10:25.059464 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-policysync\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059458 kubelet[2567]: I0421 10:10:25.059475 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-sys-fs\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059743 kubelet[2567]: I0421 10:10:25.059495 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-cni-log-dir\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059743 kubelet[2567]: I0421 10:10:25.059506 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-lib-modules\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059743 kubelet[2567]: I0421 10:10:25.059517 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/43eeb58b-dda0-4ec9-a57d-e18e3b900300-node-certs\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059743 kubelet[2567]: I0421 10:10:25.059527 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-nodeproc\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059743 kubelet[2567]: I0421 10:10:25.059540 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-xtables-lock\") 
pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.059845 kubelet[2567]: I0421 10:10:25.059558 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-cni-net-dir\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.060496 kubelet[2567]: I0421 10:10:25.060463 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43eeb58b-dda0-4ec9-a57d-e18e3b900300-tigera-ca-bundle\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.060496 kubelet[2567]: I0421 10:10:25.060486 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-var-run-calico\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.060557 kubelet[2567]: I0421 10:10:25.060497 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94vvd\" (UniqueName: \"kubernetes.io/projected/43eeb58b-dda0-4ec9-a57d-e18e3b900300-kube-api-access-94vvd\") pod \"calico-node-l9qqk\" (UID: \"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.060557 kubelet[2567]: I0421 10:10:25.060518 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/43eeb58b-dda0-4ec9-a57d-e18e3b900300-bpffs\") pod \"calico-node-l9qqk\" (UID: 
\"43eeb58b-dda0-4ec9-a57d-e18e3b900300\") " pod="calico-system/calico-node-l9qqk" Apr 21 10:10:25.085675 kubelet[2567]: E0421 10:10:25.084893 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6" Apr 21 10:10:25.160935 kubelet[2567]: I0421 10:10:25.160864 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5cda479-2770-45d5-b204-a8ebaf013eb6-registration-dir\") pod \"csi-node-driver-7f724\" (UID: \"a5cda479-2770-45d5-b204-a8ebaf013eb6\") " pod="calico-system/csi-node-driver-7f724" Apr 21 10:10:25.160935 kubelet[2567]: I0421 10:10:25.160936 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5cda479-2770-45d5-b204-a8ebaf013eb6-kubelet-dir\") pod \"csi-node-driver-7f724\" (UID: \"a5cda479-2770-45d5-b204-a8ebaf013eb6\") " pod="calico-system/csi-node-driver-7f724" Apr 21 10:10:25.160935 kubelet[2567]: I0421 10:10:25.160953 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5cda479-2770-45d5-b204-a8ebaf013eb6-varrun\") pod \"csi-node-driver-7f724\" (UID: \"a5cda479-2770-45d5-b204-a8ebaf013eb6\") " pod="calico-system/csi-node-driver-7f724" Apr 21 10:10:25.161174 kubelet[2567]: I0421 10:10:25.160995 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5cda479-2770-45d5-b204-a8ebaf013eb6-socket-dir\") pod \"csi-node-driver-7f724\" (UID: \"a5cda479-2770-45d5-b204-a8ebaf013eb6\") " 
pod="calico-system/csi-node-driver-7f724" Apr 21 10:10:25.161174 kubelet[2567]: I0421 10:10:25.161008 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srnct\" (UniqueName: \"kubernetes.io/projected/a5cda479-2770-45d5-b204-a8ebaf013eb6-kube-api-access-srnct\") pod \"csi-node-driver-7f724\" (UID: \"a5cda479-2770-45d5-b204-a8ebaf013eb6\") " pod="calico-system/csi-node-driver-7f724" Apr 21 10:10:25.163236 kubelet[2567]: E0421 10:10:25.162947 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.163236 kubelet[2567]: W0421 10:10:25.162967 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.163236 kubelet[2567]: E0421 10:10:25.162985 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.163475 kubelet[2567]: E0421 10:10:25.163410 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.163475 kubelet[2567]: W0421 10:10:25.163417 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.163475 kubelet[2567]: E0421 10:10:25.163427 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.164154 kubelet[2567]: E0421 10:10:25.164040 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.164154 kubelet[2567]: W0421 10:10:25.164072 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.164154 kubelet[2567]: E0421 10:10:25.164081 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.164690 kubelet[2567]: E0421 10:10:25.164531 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.166582 kubelet[2567]: W0421 10:10:25.164778 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.166582 kubelet[2567]: E0421 10:10:25.164789 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.167345 kubelet[2567]: E0421 10:10:25.167334 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.167497 kubelet[2567]: W0421 10:10:25.167420 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.167497 kubelet[2567]: E0421 10:10:25.167433 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.170532 kubelet[2567]: E0421 10:10:25.170420 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.170532 kubelet[2567]: W0421 10:10:25.170433 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.170532 kubelet[2567]: E0421 10:10:25.170455 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.170828 kubelet[2567]: E0421 10:10:25.170818 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.170972 kubelet[2567]: W0421 10:10:25.170877 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.170972 kubelet[2567]: E0421 10:10:25.170887 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.171330 kubelet[2567]: E0421 10:10:25.171320 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.171453 kubelet[2567]: W0421 10:10:25.171376 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.171453 kubelet[2567]: E0421 10:10:25.171385 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.173796 kubelet[2567]: E0421 10:10:25.173784 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.173960 kubelet[2567]: W0421 10:10:25.173950 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.174060 kubelet[2567]: E0421 10:10:25.174052 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.174471 kubelet[2567]: E0421 10:10:25.174433 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.174471 kubelet[2567]: W0421 10:10:25.174455 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.174471 kubelet[2567]: E0421 10:10:25.174472 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.175500 kubelet[2567]: E0421 10:10:25.175264 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.175500 kubelet[2567]: W0421 10:10:25.175277 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.175500 kubelet[2567]: E0421 10:10:25.175287 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.175969 kubelet[2567]: E0421 10:10:25.175944 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.175969 kubelet[2567]: W0421 10:10:25.175957 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.175969 kubelet[2567]: E0421 10:10:25.175966 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.178507 containerd[1504]: time="2026-04-21T10:10:25.178457754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d5d9884c-wnrdw,Uid:e9b18869-89b0-47e7-8416-655f904b410f,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:25.210187 containerd[1504]: time="2026-04-21T10:10:25.209860505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:25.210187 containerd[1504]: time="2026-04-21T10:10:25.209976879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:25.210187 containerd[1504]: time="2026-04-21T10:10:25.210022386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:25.210502 containerd[1504]: time="2026-04-21T10:10:25.210138940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:25.231755 systemd[1]: Started cri-containerd-39a64ae5fa33c6b5a6ae0adea0315a2b4987645edbf0c4601e254ae0906aeb95.scope - libcontainer container 39a64ae5fa33c6b5a6ae0adea0315a2b4987645edbf0c4601e254ae0906aeb95. Apr 21 10:10:25.263159 kubelet[2567]: E0421 10:10:25.262501 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.263159 kubelet[2567]: W0421 10:10:25.262520 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.263159 kubelet[2567]: E0421 10:10:25.262538 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.263159 kubelet[2567]: E0421 10:10:25.262871 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.263159 kubelet[2567]: W0421 10:10:25.262879 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.263159 kubelet[2567]: E0421 10:10:25.262888 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.263680 kubelet[2567]: E0421 10:10:25.263425 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.263680 kubelet[2567]: W0421 10:10:25.263434 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.263680 kubelet[2567]: E0421 10:10:25.263468 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.264215 kubelet[2567]: E0421 10:10:25.264203 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.264282 kubelet[2567]: W0421 10:10:25.264271 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.264347 kubelet[2567]: E0421 10:10:25.264321 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.264858 kubelet[2567]: E0421 10:10:25.264846 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.265027 kubelet[2567]: W0421 10:10:25.264922 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.265027 kubelet[2567]: E0421 10:10:25.264934 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.265523 kubelet[2567]: E0421 10:10:25.265410 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.265523 kubelet[2567]: W0421 10:10:25.265421 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.265523 kubelet[2567]: E0421 10:10:25.265431 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.266250 kubelet[2567]: E0421 10:10:25.266009 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.266250 kubelet[2567]: W0421 10:10:25.266020 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.266250 kubelet[2567]: E0421 10:10:25.266029 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.267445 kubelet[2567]: E0421 10:10:25.266979 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.267445 kubelet[2567]: W0421 10:10:25.267001 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.267445 kubelet[2567]: E0421 10:10:25.267011 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.267445 kubelet[2567]: E0421 10:10:25.267287 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.267445 kubelet[2567]: W0421 10:10:25.267295 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.267445 kubelet[2567]: E0421 10:10:25.267303 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.267731 kubelet[2567]: E0421 10:10:25.267721 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.267782 kubelet[2567]: W0421 10:10:25.267773 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.267826 kubelet[2567]: E0421 10:10:25.267813 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.268274 kubelet[2567]: E0421 10:10:25.268263 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.268323 kubelet[2567]: W0421 10:10:25.268314 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.268371 kubelet[2567]: E0421 10:10:25.268361 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.268825 kubelet[2567]: E0421 10:10:25.268814 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.268925 kubelet[2567]: W0421 10:10:25.268902 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.268925 kubelet[2567]: E0421 10:10:25.268913 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.269383 kubelet[2567]: E0421 10:10:25.269352 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.269383 kubelet[2567]: W0421 10:10:25.269363 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.269383 kubelet[2567]: E0421 10:10:25.269371 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.271800 kubelet[2567]: E0421 10:10:25.270312 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.271800 kubelet[2567]: W0421 10:10:25.270323 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.271800 kubelet[2567]: E0421 10:10:25.270333 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.272046 kubelet[2567]: E0421 10:10:25.271929 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.272046 kubelet[2567]: W0421 10:10:25.271941 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.272046 kubelet[2567]: E0421 10:10:25.271952 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.272128 containerd[1504]: time="2026-04-21T10:10:25.272041056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d5d9884c-wnrdw,Uid:e9b18869-89b0-47e7-8416-655f904b410f,Namespace:calico-system,Attempt:0,} returns sandbox id \"39a64ae5fa33c6b5a6ae0adea0315a2b4987645edbf0c4601e254ae0906aeb95\"" Apr 21 10:10:25.272962 kubelet[2567]: E0421 10:10:25.272857 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.272962 kubelet[2567]: W0421 10:10:25.272869 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.272962 kubelet[2567]: E0421 10:10:25.272878 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.273859 containerd[1504]: time="2026-04-21T10:10:25.273512810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9qqk,Uid:43eeb58b-dda0-4ec9-a57d-e18e3b900300,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:25.273859 containerd[1504]: time="2026-04-21T10:10:25.273753488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:10:25.273980 kubelet[2567]: E0421 10:10:25.273970 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.274049 kubelet[2567]: W0421 10:10:25.274039 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.274094 kubelet[2567]: E0421 10:10:25.274078 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.274343 kubelet[2567]: E0421 10:10:25.274332 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.274398 kubelet[2567]: W0421 10:10:25.274388 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.274430 kubelet[2567]: E0421 10:10:25.274423 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.274713 kubelet[2567]: E0421 10:10:25.274704 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.274761 kubelet[2567]: W0421 10:10:25.274754 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.274869 kubelet[2567]: E0421 10:10:25.274787 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.275068 kubelet[2567]: E0421 10:10:25.275059 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.275130 kubelet[2567]: W0421 10:10:25.275105 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.275130 kubelet[2567]: E0421 10:10:25.275114 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.275460 kubelet[2567]: E0421 10:10:25.275365 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.275460 kubelet[2567]: W0421 10:10:25.275373 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.275460 kubelet[2567]: E0421 10:10:25.275379 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.275688 kubelet[2567]: E0421 10:10:25.275680 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.275962 kubelet[2567]: W0421 10:10:25.275729 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.275962 kubelet[2567]: E0421 10:10:25.275741 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.276182 kubelet[2567]: E0421 10:10:25.276173 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.276227 kubelet[2567]: W0421 10:10:25.276220 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.276260 kubelet[2567]: E0421 10:10:25.276253 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.276508 kubelet[2567]: E0421 10:10:25.276498 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.276560 kubelet[2567]: W0421 10:10:25.276552 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.276630 kubelet[2567]: E0421 10:10:25.276620 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:25.276910 kubelet[2567]: E0421 10:10:25.276883 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.276910 kubelet[2567]: W0421 10:10:25.276890 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.276910 kubelet[2567]: E0421 10:10:25.276897 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.286072 kubelet[2567]: E0421 10:10:25.286055 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:25.286187 kubelet[2567]: W0421 10:10:25.286154 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:25.286187 kubelet[2567]: E0421 10:10:25.286168 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:25.301162 containerd[1504]: time="2026-04-21T10:10:25.300928999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:25.301162 containerd[1504]: time="2026-04-21T10:10:25.300975468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:25.301162 containerd[1504]: time="2026-04-21T10:10:25.300986475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:25.301403 containerd[1504]: time="2026-04-21T10:10:25.301327323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:25.324784 systemd[1]: Started cri-containerd-0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a.scope - libcontainer container 0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a. Apr 21 10:10:25.351346 containerd[1504]: time="2026-04-21T10:10:25.351296034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9qqk,Uid:43eeb58b-dda0-4ec9-a57d-e18e3b900300,Namespace:calico-system,Attempt:0,} returns sandbox id \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\"" Apr 21 10:10:26.468858 kubelet[2567]: E0421 10:10:26.467883 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6" Apr 21 10:10:27.047985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329666606.mount: Deactivated successfully. 
Apr 21 10:10:27.396718 containerd[1504]: time="2026-04-21T10:10:27.396677694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:10:27.397794 containerd[1504]: time="2026-04-21T10:10:27.397764715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 21 10:10:27.398576 containerd[1504]: time="2026-04-21T10:10:27.398530878Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:10:27.400592 containerd[1504]: time="2026-04-21T10:10:27.400542318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:10:27.401138 containerd[1504]: time="2026-04-21T10:10:27.401114732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.127344019s"
Apr 21 10:10:27.401180 containerd[1504]: time="2026-04-21T10:10:27.401139619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 21 10:10:27.402608 containerd[1504]: time="2026-04-21T10:10:27.401924900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 21 10:10:27.416264 containerd[1504]: time="2026-04-21T10:10:27.416165853Z" level=info msg="CreateContainer within sandbox \"39a64ae5fa33c6b5a6ae0adea0315a2b4987645edbf0c4601e254ae0906aeb95\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 21 10:10:27.436703 containerd[1504]: time="2026-04-21T10:10:27.436665561Z" level=info msg="CreateContainer within sandbox \"39a64ae5fa33c6b5a6ae0adea0315a2b4987645edbf0c4601e254ae0906aeb95\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bec523a48a0bd4194f0f2b94edf7128df1051b474029661713513ca1fdd93cee\""
Apr 21 10:10:27.437217 containerd[1504]: time="2026-04-21T10:10:27.437191215Z" level=info msg="StartContainer for \"bec523a48a0bd4194f0f2b94edf7128df1051b474029661713513ca1fdd93cee\""
Apr 21 10:10:27.461686 systemd[1]: Started cri-containerd-bec523a48a0bd4194f0f2b94edf7128df1051b474029661713513ca1fdd93cee.scope - libcontainer container bec523a48a0bd4194f0f2b94edf7128df1051b474029661713513ca1fdd93cee.
Apr 21 10:10:27.502114 containerd[1504]: time="2026-04-21T10:10:27.502076609Z" level=info msg="StartContainer for \"bec523a48a0bd4194f0f2b94edf7128df1051b474029661713513ca1fdd93cee\" returns successfully"
Apr 21 10:10:27.536146 kubelet[2567]: I0421 10:10:27.536099 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69d5d9884c-wnrdw" podStartSLOduration=1.407748138 podStartE2EDuration="3.536086679s" podCreationTimestamp="2026-04-21 10:10:24 +0000 UTC" firstStartedPulling="2026-04-21 10:10:25.27349984 +0000 UTC m=+14.883009904" lastFinishedPulling="2026-04-21 10:10:27.401838371 +0000 UTC m=+17.011348445" observedRunningTime="2026-04-21 10:10:27.534889252 +0000 UTC m=+17.144399326" watchObservedRunningTime="2026-04-21 10:10:27.536086679 +0000 UTC m=+17.145596753"
Apr 21 10:10:27.577506 kubelet[2567]: E0421 10:10:27.577466 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:10:27.577506 kubelet[2567]: W0421 10:10:27.577503 2567 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:10:27.577663 kubelet[2567]: E0421 10:10:27.577524 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:10:27.578045 kubelet[2567]: E0421 10:10:27.577762 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:10:27.578045 kubelet[2567]: W0421 10:10:27.577771 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:10:27.578045 kubelet[2567]: E0421 10:10:27.577777 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:10:27.594363 kubelet[2567]: E0421 10:10:27.594355 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 21 10:10:28.469610 kubelet[2567]: E0421 10:10:28.468253 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6"
Apr 21 10:10:28.589242 kubelet[2567]: E0421 10:10:28.589201 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:10:28.589242 kubelet[2567]: W0421 10:10:28.589227 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:10:28.589242 kubelet[2567]: E0421 10:10:28.589250 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:10:28.592756 kubelet[2567]: E0421 10:10:28.592747 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.592999 kubelet[2567]: E0421 10:10:28.592982 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.592999 kubelet[2567]: W0421 10:10:28.592994 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.593071 kubelet[2567]: E0421 10:10:28.593003 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.594200 kubelet[2567]: E0421 10:10:28.594182 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.594200 kubelet[2567]: W0421 10:10:28.594193 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.594200 kubelet[2567]: E0421 10:10:28.594201 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.594438 kubelet[2567]: E0421 10:10:28.594424 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.594438 kubelet[2567]: W0421 10:10:28.594434 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.594491 kubelet[2567]: E0421 10:10:28.594443 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.594694 kubelet[2567]: E0421 10:10:28.594679 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.594694 kubelet[2567]: W0421 10:10:28.594690 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.594753 kubelet[2567]: E0421 10:10:28.594698 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.594991 kubelet[2567]: E0421 10:10:28.594976 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.594991 kubelet[2567]: W0421 10:10:28.594987 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.595045 kubelet[2567]: E0421 10:10:28.594995 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.595227 kubelet[2567]: E0421 10:10:28.595213 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.595227 kubelet[2567]: W0421 10:10:28.595223 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.595275 kubelet[2567]: E0421 10:10:28.595231 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.595505 kubelet[2567]: E0421 10:10:28.595472 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.595505 kubelet[2567]: W0421 10:10:28.595483 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.595505 kubelet[2567]: E0421 10:10:28.595491 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.595844 kubelet[2567]: E0421 10:10:28.595825 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.595844 kubelet[2567]: W0421 10:10:28.595839 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.595896 kubelet[2567]: E0421 10:10:28.595847 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.596129 kubelet[2567]: E0421 10:10:28.596092 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.596129 kubelet[2567]: W0421 10:10:28.596100 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.596129 kubelet[2567]: E0421 10:10:28.596109 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.596380 kubelet[2567]: E0421 10:10:28.596363 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.596380 kubelet[2567]: W0421 10:10:28.596375 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.596446 kubelet[2567]: E0421 10:10:28.596384 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.596682 kubelet[2567]: E0421 10:10:28.596666 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.596713 kubelet[2567]: W0421 10:10:28.596700 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.596713 kubelet[2567]: E0421 10:10:28.596709 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.597080 kubelet[2567]: E0421 10:10:28.597060 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.597080 kubelet[2567]: W0421 10:10:28.597072 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.597080 kubelet[2567]: E0421 10:10:28.597079 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.597314 kubelet[2567]: E0421 10:10:28.597296 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.597314 kubelet[2567]: W0421 10:10:28.597307 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.597377 kubelet[2567]: E0421 10:10:28.597316 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.597617 kubelet[2567]: E0421 10:10:28.597560 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.597698 kubelet[2567]: W0421 10:10:28.597677 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.597698 kubelet[2567]: E0421 10:10:28.597692 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.598096 kubelet[2567]: E0421 10:10:28.597882 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.598096 kubelet[2567]: W0421 10:10:28.597889 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.598096 kubelet[2567]: E0421 10:10:28.597897 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.598352 kubelet[2567]: E0421 10:10:28.598333 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.598352 kubelet[2567]: W0421 10:10:28.598345 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.598429 kubelet[2567]: E0421 10:10:28.598354 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.598622 kubelet[2567]: E0421 10:10:28.598603 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.598622 kubelet[2567]: W0421 10:10:28.598616 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.598667 kubelet[2567]: E0421 10:10:28.598625 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:28.599047 kubelet[2567]: E0421 10:10:28.599029 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.599047 kubelet[2567]: W0421 10:10:28.599041 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.599101 kubelet[2567]: E0421 10:10:28.599050 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:10:28.599272 kubelet[2567]: E0421 10:10:28.599257 2567 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:10:28.599272 kubelet[2567]: W0421 10:10:28.599268 2567 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:10:28.599309 kubelet[2567]: E0421 10:10:28.599277 2567 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:10:29.247091 containerd[1504]: time="2026-04-21T10:10:29.247040090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:29.248160 containerd[1504]: time="2026-04-21T10:10:29.248028445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 21 10:10:29.248991 containerd[1504]: time="2026-04-21T10:10:29.248848279Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:29.251543 containerd[1504]: time="2026-04-21T10:10:29.251031888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:29.251543 containerd[1504]: time="2026-04-21T10:10:29.251453628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.849507396s" Apr 21 10:10:29.251543 containerd[1504]: time="2026-04-21T10:10:29.251474479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:10:29.256198 containerd[1504]: time="2026-04-21T10:10:29.256138221Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:10:29.272498 containerd[1504]: time="2026-04-21T10:10:29.272430661Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9\"" Apr 21 10:10:29.273253 containerd[1504]: time="2026-04-21T10:10:29.273234771Z" level=info msg="StartContainer for \"5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9\"" Apr 21 10:10:29.307749 systemd[1]: Started cri-containerd-5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9.scope - libcontainer container 5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9. Apr 21 10:10:29.337935 containerd[1504]: time="2026-04-21T10:10:29.337672891Z" level=info msg="StartContainer for \"5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9\" returns successfully" Apr 21 10:10:29.349193 systemd[1]: cri-containerd-5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9.scope: Deactivated successfully. 
Apr 21 10:10:29.369366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9-rootfs.mount: Deactivated successfully. Apr 21 10:10:29.466996 containerd[1504]: time="2026-04-21T10:10:29.466902876Z" level=info msg="shim disconnected" id=5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9 namespace=k8s.io Apr 21 10:10:29.467701 containerd[1504]: time="2026-04-21T10:10:29.466991629Z" level=warning msg="cleaning up after shim disconnected" id=5c58dac5b2f502a759748a37ee97f23c91780e5afa66487057df60f98ce079a9 namespace=k8s.io Apr 21 10:10:29.467701 containerd[1504]: time="2026-04-21T10:10:29.467015515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:10:29.480250 containerd[1504]: time="2026-04-21T10:10:29.480165264Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:10:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:10:29.531208 containerd[1504]: time="2026-04-21T10:10:29.530956051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:10:30.470346 kubelet[2567]: E0421 10:10:30.468114 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6" Apr 21 10:10:32.468594 kubelet[2567]: E0421 10:10:32.468038 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6" Apr 21 10:10:33.654423 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204447705.mount: Deactivated successfully. Apr 21 10:10:33.690275 containerd[1504]: time="2026-04-21T10:10:33.690195144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:33.691683 containerd[1504]: time="2026-04-21T10:10:33.691445883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:10:33.694437 containerd[1504]: time="2026-04-21T10:10:33.693185273Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:33.695643 containerd[1504]: time="2026-04-21T10:10:33.695611328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:33.696450 containerd[1504]: time="2026-04-21T10:10:33.696205828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.165217789s" Apr 21 10:10:33.696450 containerd[1504]: time="2026-04-21T10:10:33.696244064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:10:33.701701 containerd[1504]: time="2026-04-21T10:10:33.701664195Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" 
Apr 21 10:10:33.726860 containerd[1504]: time="2026-04-21T10:10:33.726809918Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c\"" Apr 21 10:10:33.728668 containerd[1504]: time="2026-04-21T10:10:33.728637228Z" level=info msg="StartContainer for \"640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c\"" Apr 21 10:10:33.761714 systemd[1]: Started cri-containerd-640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c.scope - libcontainer container 640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c. Apr 21 10:10:33.790671 containerd[1504]: time="2026-04-21T10:10:33.790628389Z" level=info msg="StartContainer for \"640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c\" returns successfully" Apr 21 10:10:33.824090 systemd[1]: cri-containerd-640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c.scope: Deactivated successfully. 
Apr 21 10:10:33.924527 containerd[1504]: time="2026-04-21T10:10:33.924407352Z" level=info msg="shim disconnected" id=640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c namespace=k8s.io Apr 21 10:10:33.925596 containerd[1504]: time="2026-04-21T10:10:33.924749394Z" level=warning msg="cleaning up after shim disconnected" id=640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c namespace=k8s.io Apr 21 10:10:33.925596 containerd[1504]: time="2026-04-21T10:10:33.924763094Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:10:34.469183 kubelet[2567]: E0421 10:10:34.467909 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6" Apr 21 10:10:34.546606 containerd[1504]: time="2026-04-21T10:10:34.546528974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:10:34.654923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-640ae17319fbc2f43593ef9c24107bdea9156c61815f7b5d907d47f19852119c-rootfs.mount: Deactivated successfully. 
Apr 21 10:10:36.469158 kubelet[2567]: E0421 10:10:36.468958 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6" Apr 21 10:10:37.340359 containerd[1504]: time="2026-04-21T10:10:37.340317675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:37.349483 containerd[1504]: time="2026-04-21T10:10:37.349398500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:10:37.350648 containerd[1504]: time="2026-04-21T10:10:37.350491686Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:37.354226 containerd[1504]: time="2026-04-21T10:10:37.354205191Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:37.355257 containerd[1504]: time="2026-04-21T10:10:37.355219649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.808564727s" Apr 21 10:10:37.355297 containerd[1504]: time="2026-04-21T10:10:37.355262513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:10:37.377174 containerd[1504]: time="2026-04-21T10:10:37.377109416Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:10:37.414796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028845589.mount: Deactivated successfully. Apr 21 10:10:37.429949 containerd[1504]: time="2026-04-21T10:10:37.429891278Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c\"" Apr 21 10:10:37.430511 containerd[1504]: time="2026-04-21T10:10:37.430474981Z" level=info msg="StartContainer for \"3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c\"" Apr 21 10:10:37.462208 systemd[1]: Started cri-containerd-3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c.scope - libcontainer container 3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c. Apr 21 10:10:37.491037 containerd[1504]: time="2026-04-21T10:10:37.490997222Z" level=info msg="StartContainer for \"3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c\" returns successfully" Apr 21 10:10:37.943636 containerd[1504]: time="2026-04-21T10:10:37.943586627Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:10:37.946754 systemd[1]: cri-containerd-3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c.scope: Deactivated successfully. 
Apr 21 10:10:37.964491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c-rootfs.mount: Deactivated successfully. Apr 21 10:10:37.985643 kubelet[2567]: I0421 10:10:37.984365 2567 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:10:38.015766 containerd[1504]: time="2026-04-21T10:10:38.015617088Z" level=info msg="shim disconnected" id=3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c namespace=k8s.io Apr 21 10:10:38.015766 containerd[1504]: time="2026-04-21T10:10:38.015669807Z" level=warning msg="cleaning up after shim disconnected" id=3747bcb9cc182b3fcb3bbdaa9e9d72dac767b62b5ea6c34e59c185acf7bdf04c namespace=k8s.io Apr 21 10:10:38.015766 containerd[1504]: time="2026-04-21T10:10:38.015676828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:10:38.025999 systemd[1]: Created slice kubepods-burstable-pod89203aa1_f58e_43c1_aeac_389c8c4e354d.slice - libcontainer container kubepods-burstable-pod89203aa1_f58e_43c1_aeac_389c8c4e354d.slice. Apr 21 10:10:38.040718 systemd[1]: Created slice kubepods-besteffort-podb3626028_18b5_47bc_859d_14b0862e581b.slice - libcontainer container kubepods-besteffort-podb3626028_18b5_47bc_859d_14b0862e581b.slice. 
Apr 21 10:10:38.059800 kubelet[2567]: I0421 10:10:38.058969 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kwx\" (UniqueName: \"kubernetes.io/projected/b3626028-18b5-47bc-859d-14b0862e581b-kube-api-access-h9kwx\") pod \"calico-kube-controllers-9cb6dbfd9-g7plt\" (UID: \"b3626028-18b5-47bc-859d-14b0862e581b\") " pod="calico-system/calico-kube-controllers-9cb6dbfd9-g7plt" Apr 21 10:10:38.059800 kubelet[2567]: I0421 10:10:38.059002 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c0203ee-d1e8-4697-85ba-00626b1ad292-calico-apiserver-certs\") pod \"calico-apiserver-5c9bd8455-nbw87\" (UID: \"4c0203ee-d1e8-4697-85ba-00626b1ad292\") " pod="calico-system/calico-apiserver-5c9bd8455-nbw87" Apr 21 10:10:38.059800 kubelet[2567]: I0421 10:10:38.059015 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3626028-18b5-47bc-859d-14b0862e581b-tigera-ca-bundle\") pod \"calico-kube-controllers-9cb6dbfd9-g7plt\" (UID: \"b3626028-18b5-47bc-859d-14b0862e581b\") " pod="calico-system/calico-kube-controllers-9cb6dbfd9-g7plt" Apr 21 10:10:38.059800 kubelet[2567]: I0421 10:10:38.059029 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce1fbcf9-9c29-4f48-8a24-9477c46cb787-config-volume\") pod \"coredns-674b8bbfcf-d9sdn\" (UID: \"ce1fbcf9-9c29-4f48-8a24-9477c46cb787\") " pod="kube-system/coredns-674b8bbfcf-d9sdn" Apr 21 10:10:38.059800 kubelet[2567]: I0421 10:10:38.059040 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2rb\" (UniqueName: \"kubernetes.io/projected/ce1fbcf9-9c29-4f48-8a24-9477c46cb787-kube-api-access-7v2rb\") pod 
\"coredns-674b8bbfcf-d9sdn\" (UID: \"ce1fbcf9-9c29-4f48-8a24-9477c46cb787\") " pod="kube-system/coredns-674b8bbfcf-d9sdn" Apr 21 10:10:38.060041 kubelet[2567]: I0421 10:10:38.059051 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-backend-key-pair\") pod \"whisker-969547dbb-pw986\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " pod="calico-system/whisker-969547dbb-pw986" Apr 21 10:10:38.060041 kubelet[2567]: I0421 10:10:38.059064 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhh8\" (UniqueName: \"kubernetes.io/projected/4c0203ee-d1e8-4697-85ba-00626b1ad292-kube-api-access-lmhh8\") pod \"calico-apiserver-5c9bd8455-nbw87\" (UID: \"4c0203ee-d1e8-4697-85ba-00626b1ad292\") " pod="calico-system/calico-apiserver-5c9bd8455-nbw87" Apr 21 10:10:38.060041 kubelet[2567]: I0421 10:10:38.059074 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9c8\" (UniqueName: \"kubernetes.io/projected/37a77d25-20a9-48a5-b82c-bd7602bba49a-kube-api-access-kx9c8\") pod \"whisker-969547dbb-pw986\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " pod="calico-system/whisker-969547dbb-pw986" Apr 21 10:10:38.060041 kubelet[2567]: I0421 10:10:38.059091 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-ca-bundle\") pod \"whisker-969547dbb-pw986\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " pod="calico-system/whisker-969547dbb-pw986" Apr 21 10:10:38.060041 kubelet[2567]: I0421 10:10:38.059110 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/93fb5265-098d-445f-9cde-fcef06ca57d0-config\") pod \"goldmane-5b85766d88-8gxsz\" (UID: \"93fb5265-098d-445f-9cde-fcef06ca57d0\") " pod="calico-system/goldmane-5b85766d88-8gxsz" Apr 21 10:10:38.060131 kubelet[2567]: I0421 10:10:38.059122 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djzbp\" (UniqueName: \"kubernetes.io/projected/93fb5265-098d-445f-9cde-fcef06ca57d0-kube-api-access-djzbp\") pod \"goldmane-5b85766d88-8gxsz\" (UID: \"93fb5265-098d-445f-9cde-fcef06ca57d0\") " pod="calico-system/goldmane-5b85766d88-8gxsz" Apr 21 10:10:38.060131 kubelet[2567]: I0421 10:10:38.059133 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89203aa1-f58e-43c1-aeac-389c8c4e354d-config-volume\") pod \"coredns-674b8bbfcf-sr8nw\" (UID: \"89203aa1-f58e-43c1-aeac-389c8c4e354d\") " pod="kube-system/coredns-674b8bbfcf-sr8nw" Apr 21 10:10:38.060131 kubelet[2567]: I0421 10:10:38.059144 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-nginx-config\") pod \"whisker-969547dbb-pw986\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " pod="calico-system/whisker-969547dbb-pw986" Apr 21 10:10:38.060131 kubelet[2567]: I0421 10:10:38.059156 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93fb5265-098d-445f-9cde-fcef06ca57d0-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-8gxsz\" (UID: \"93fb5265-098d-445f-9cde-fcef06ca57d0\") " pod="calico-system/goldmane-5b85766d88-8gxsz" Apr 21 10:10:38.060131 kubelet[2567]: I0421 10:10:38.059167 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/93fb5265-098d-445f-9cde-fcef06ca57d0-goldmane-key-pair\") pod \"goldmane-5b85766d88-8gxsz\" (UID: \"93fb5265-098d-445f-9cde-fcef06ca57d0\") " pod="calico-system/goldmane-5b85766d88-8gxsz" Apr 21 10:10:38.060217 kubelet[2567]: I0421 10:10:38.059185 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tltjl\" (UniqueName: \"kubernetes.io/projected/89203aa1-f58e-43c1-aeac-389c8c4e354d-kube-api-access-tltjl\") pod \"coredns-674b8bbfcf-sr8nw\" (UID: \"89203aa1-f58e-43c1-aeac-389c8c4e354d\") " pod="kube-system/coredns-674b8bbfcf-sr8nw" Apr 21 10:10:38.065928 systemd[1]: Created slice kubepods-besteffort-pod4c0203ee_d1e8_4697_85ba_00626b1ad292.slice - libcontainer container kubepods-besteffort-pod4c0203ee_d1e8_4697_85ba_00626b1ad292.slice. Apr 21 10:10:38.074652 systemd[1]: Created slice kubepods-besteffort-pod37a77d25_20a9_48a5_b82c_bd7602bba49a.slice - libcontainer container kubepods-besteffort-pod37a77d25_20a9_48a5_b82c_bd7602bba49a.slice. Apr 21 10:10:38.082009 systemd[1]: Created slice kubepods-besteffort-pod93fb5265_098d_445f_9cde_fcef06ca57d0.slice - libcontainer container kubepods-besteffort-pod93fb5265_098d_445f_9cde_fcef06ca57d0.slice. Apr 21 10:10:38.089013 systemd[1]: Created slice kubepods-burstable-podce1fbcf9_9c29_4f48_8a24_9477c46cb787.slice - libcontainer container kubepods-burstable-podce1fbcf9_9c29_4f48_8a24_9477c46cb787.slice. Apr 21 10:10:38.094439 systemd[1]: Created slice kubepods-besteffort-poda753325e_f90f_4e6b_a9d0_a84ed1028eab.slice - libcontainer container kubepods-besteffort-poda753325e_f90f_4e6b_a9d0_a84ed1028eab.slice. 
Apr 21 10:10:38.159912 kubelet[2567]: I0421 10:10:38.159814 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz2kh\" (UniqueName: \"kubernetes.io/projected/a753325e-f90f-4e6b-a9d0-a84ed1028eab-kube-api-access-zz2kh\") pod \"calico-apiserver-5c9bd8455-plt4c\" (UID: \"a753325e-f90f-4e6b-a9d0-a84ed1028eab\") " pod="calico-system/calico-apiserver-5c9bd8455-plt4c"
Apr 21 10:10:38.159912 kubelet[2567]: I0421 10:10:38.159851 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a753325e-f90f-4e6b-a9d0-a84ed1028eab-calico-apiserver-certs\") pod \"calico-apiserver-5c9bd8455-plt4c\" (UID: \"a753325e-f90f-4e6b-a9d0-a84ed1028eab\") " pod="calico-system/calico-apiserver-5c9bd8455-plt4c"
Apr 21 10:10:38.338135 containerd[1504]: time="2026-04-21T10:10:38.337960081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sr8nw,Uid:89203aa1-f58e-43c1-aeac-389c8c4e354d,Namespace:kube-system,Attempt:0,}"
Apr 21 10:10:38.368142 containerd[1504]: time="2026-04-21T10:10:38.366259124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb6dbfd9-g7plt,Uid:b3626028-18b5-47bc-859d-14b0862e581b,Namespace:calico-system,Attempt:0,}"
Apr 21 10:10:38.373825 containerd[1504]: time="2026-04-21T10:10:38.373732477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-nbw87,Uid:4c0203ee-d1e8-4697-85ba-00626b1ad292,Namespace:calico-system,Attempt:0,}"
Apr 21 10:10:38.380008 containerd[1504]: time="2026-04-21T10:10:38.379623072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-969547dbb-pw986,Uid:37a77d25-20a9-48a5-b82c-bd7602bba49a,Namespace:calico-system,Attempt:0,}"
Apr 21 10:10:38.388190 containerd[1504]: time="2026-04-21T10:10:38.388149790Z" level=info msg="RunPodSandbox for
&PodSandboxMetadata{Name:goldmane-5b85766d88-8gxsz,Uid:93fb5265-098d-445f-9cde-fcef06ca57d0,Namespace:calico-system,Attempt:0,}"
Apr 21 10:10:38.393015 containerd[1504]: time="2026-04-21T10:10:38.392980498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9sdn,Uid:ce1fbcf9-9c29-4f48-8a24-9477c46cb787,Namespace:kube-system,Attempt:0,}"
Apr 21 10:10:38.397803 containerd[1504]: time="2026-04-21T10:10:38.397611928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-plt4c,Uid:a753325e-f90f-4e6b-a9d0-a84ed1028eab,Namespace:calico-system,Attempt:0,}"
Apr 21 10:10:38.480058 systemd[1]: Created slice kubepods-besteffort-poda5cda479_2770_45d5_b204_a8ebaf013eb6.slice - libcontainer container kubepods-besteffort-poda5cda479_2770_45d5_b204_a8ebaf013eb6.slice.
Apr 21 10:10:38.482589 containerd[1504]: time="2026-04-21T10:10:38.482428319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7f724,Uid:a5cda479-2770-45d5-b204-a8ebaf013eb6,Namespace:calico-system,Attempt:0,}"
Apr 21 10:10:38.541938 containerd[1504]: time="2026-04-21T10:10:38.541899375Z" level=error msg="Failed to destroy network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.543164 containerd[1504]: time="2026-04-21T10:10:38.543144830Z" level=error msg="encountered an error cleaning up failed sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.549101 containerd[1504]: time="2026-04-21T10:10:38.548923977Z" level=error
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sr8nw,Uid:89203aa1-f58e-43c1-aeac-389c8c4e354d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.549257 kubelet[2567]: E0421 10:10:38.549220 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.549316 kubelet[2567]: E0421 10:10:38.549284 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sr8nw"
Apr 21 10:10:38.549316 kubelet[2567]: E0421 10:10:38.549304 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sr8nw"
Apr 21 10:10:38.549486 kubelet[2567]: E0421 10:10:38.549350 2567 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sr8nw_kube-system(89203aa1-f58e-43c1-aeac-389c8c4e354d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sr8nw_kube-system(89203aa1-f58e-43c1-aeac-389c8c4e354d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sr8nw" podUID="89203aa1-f58e-43c1-aeac-389c8c4e354d"
Apr 21 10:10:38.590476 kubelet[2567]: I0421 10:10:38.590301 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6"
Apr 21 10:10:38.591908 containerd[1504]: time="2026-04-21T10:10:38.591812721Z" level=info msg="StopPodSandbox for \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\""
Apr 21 10:10:38.592465 containerd[1504]: time="2026-04-21T10:10:38.592332469Z" level=info msg="Ensure that sandbox e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6 in task-service has been cleanup successfully"
Apr 21 10:10:38.593312 containerd[1504]: time="2026-04-21T10:10:38.593161249Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 21 10:10:38.602231 containerd[1504]: time="2026-04-21T10:10:38.602041117Z" level=error msg="Failed to destroy network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.603417 containerd[1504]:
time="2026-04-21T10:10:38.603396754Z" level=error msg="encountered an error cleaning up failed sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.603672 containerd[1504]: time="2026-04-21T10:10:38.603544967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb6dbfd9-g7plt,Uid:b3626028-18b5-47bc-859d-14b0862e581b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.603901 kubelet[2567]: E0421 10:10:38.603856 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.604070 kubelet[2567]: E0421 10:10:38.603966 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cb6dbfd9-g7plt"
Apr 21 10:10:38.604070 kubelet[2567]: E0421 10:10:38.603987 2567
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9cb6dbfd9-g7plt"
Apr 21 10:10:38.604168 kubelet[2567]: E0421 10:10:38.604140 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9cb6dbfd9-g7plt_calico-system(b3626028-18b5-47bc-859d-14b0862e581b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9cb6dbfd9-g7plt_calico-system(b3626028-18b5-47bc-859d-14b0862e581b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9cb6dbfd9-g7plt" podUID="b3626028-18b5-47bc-859d-14b0862e581b"
Apr 21 10:10:38.633741 containerd[1504]: time="2026-04-21T10:10:38.633660678Z" level=error msg="Failed to destroy network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.634042 containerd[1504]: time="2026-04-21T10:10:38.634021708Z" level=error msg="encountered an error cleaning up failed sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.634090 containerd[1504]: time="2026-04-21T10:10:38.634066796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-nbw87,Uid:4c0203ee-d1e8-4697-85ba-00626b1ad292,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.634284 kubelet[2567]: E0421 10:10:38.634240 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.634416 kubelet[2567]: E0421 10:10:38.634285 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c9bd8455-nbw87"
Apr 21 10:10:38.634416 kubelet[2567]: E0421 10:10:38.634307 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory:
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c9bd8455-nbw87"
Apr 21 10:10:38.634416 kubelet[2567]: E0421 10:10:38.634345 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9bd8455-nbw87_calico-system(4c0203ee-d1e8-4697-85ba-00626b1ad292)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c9bd8455-nbw87_calico-system(4c0203ee-d1e8-4697-85ba-00626b1ad292)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5c9bd8455-nbw87" podUID="4c0203ee-d1e8-4697-85ba-00626b1ad292"
Apr 21 10:10:38.638964 containerd[1504]: time="2026-04-21T10:10:38.638881250Z" level=info msg="CreateContainer within sandbox \"0958766b0c9a06818d424d13447c57010b4107bfe095ee1b17544bcf4e1d4c2a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e97d8e08b061d36ac26c0a8a4ff9047514af1be43302e9530d9c81cbbf33c58d\""
Apr 21 10:10:38.641010 containerd[1504]: time="2026-04-21T10:10:38.640973030Z" level=error msg="Failed to destroy network for sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.641145 containerd[1504]: time="2026-04-21T10:10:38.641119970Z" level=info msg="StartContainer for \"e97d8e08b061d36ac26c0a8a4ff9047514af1be43302e9530d9c81cbbf33c58d\""
Apr 21 10:10:38.641554 containerd[1504]: time="2026-04-21T10:10:38.641505868Z" level=error msg="encountered an error
cleaning up failed sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.641613 containerd[1504]: time="2026-04-21T10:10:38.641599367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-969547dbb-pw986,Uid:37a77d25-20a9-48a5-b82c-bd7602bba49a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.642029 kubelet[2567]: E0421 10:10:38.641830 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.642029 kubelet[2567]: E0421 10:10:38.641877 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-969547dbb-pw986"
Apr 21 10:10:38.642029 kubelet[2567]: E0421 10:10:38.641896 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for
sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-969547dbb-pw986"
Apr 21 10:10:38.642114 kubelet[2567]: E0421 10:10:38.641932 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-969547dbb-pw986_calico-system(37a77d25-20a9-48a5-b82c-bd7602bba49a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-969547dbb-pw986_calico-system(37a77d25-20a9-48a5-b82c-bd7602bba49a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-969547dbb-pw986" podUID="37a77d25-20a9-48a5-b82c-bd7602bba49a"
Apr 21 10:10:38.657550 containerd[1504]: time="2026-04-21T10:10:38.657510675Z" level=error msg="Failed to destroy network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.657990 containerd[1504]: time="2026-04-21T10:10:38.657972777Z" level=error msg="encountered an error cleaning up failed sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.658074 containerd[1504]:
time="2026-04-21T10:10:38.658058765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8gxsz,Uid:93fb5265-098d-445f-9cde-fcef06ca57d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.658801 kubelet[2567]: E0421 10:10:38.658754 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.658857 kubelet[2567]: E0421 10:10:38.658810 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-8gxsz"
Apr 21 10:10:38.658857 kubelet[2567]: E0421 10:10:38.658830 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-8gxsz"
Apr 21 10:10:38.658921 kubelet[2567]: E0421 10:10:38.658889
2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-8gxsz_calico-system(93fb5265-098d-445f-9cde-fcef06ca57d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-8gxsz_calico-system(93fb5265-098d-445f-9cde-fcef06ca57d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-8gxsz" podUID="93fb5265-098d-445f-9cde-fcef06ca57d0"
Apr 21 10:10:38.660141 containerd[1504]: time="2026-04-21T10:10:38.660121121Z" level=error msg="Failed to destroy network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.660638 containerd[1504]: time="2026-04-21T10:10:38.660540849Z" level=error msg="encountered an error cleaning up failed sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.660881 containerd[1504]: time="2026-04-21T10:10:38.660829542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7f724,Uid:a5cda479-2770-45d5-b204-a8ebaf013eb6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\": plugin type=\"calico\" failed (add): stat
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.661129 kubelet[2567]: E0421 10:10:38.661025 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.661129 kubelet[2567]: E0421 10:10:38.661057 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7f724"
Apr 21 10:10:38.661129 kubelet[2567]: E0421 10:10:38.661071 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7f724"
Apr 21 10:10:38.661242 kubelet[2567]: E0421 10:10:38.661106 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7f724_calico-system(a5cda479-2770-45d5-b204-a8ebaf013eb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7f724_calico-system(a5cda479-2770-45d5-b204-a8ebaf013eb6)\\\": rpc error: code = Unknown desc = failed to setup network for
sandbox \\\"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7f724" podUID="a5cda479-2770-45d5-b204-a8ebaf013eb6"
Apr 21 10:10:38.674221 containerd[1504]: time="2026-04-21T10:10:38.674109032Z" level=error msg="StopPodSandbox for \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\" failed" error="failed to destroy network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:10:38.674339 kubelet[2567]: E0421 10:10:38.674300 2567 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6"
Apr 21 10:10:38.674374 kubelet[2567]: E0421 10:10:38.674345 2567 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6"}
Apr 21 10:10:38.674393 kubelet[2567]: E0421 10:10:38.674385 2567 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"89203aa1-f58e-43c1-aeac-389c8c4e354d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\\\": plugin type=\\\"calico\\\"
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 21 10:10:38.674444 kubelet[2567]: E0421 10:10:38.674404 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"89203aa1-f58e-43c1-aeac-389c8c4e354d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sr8nw" podUID="89203aa1-f58e-43c1-aeac-389c8c4e354d" Apr 21 10:10:38.674648 containerd[1504]: time="2026-04-21T10:10:38.674630902Z" level=error msg="Failed to destroy network for sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.674990 containerd[1504]: time="2026-04-21T10:10:38.674973886Z" level=error msg="encountered an error cleaning up failed sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.675057 containerd[1504]: time="2026-04-21T10:10:38.675043690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-plt4c,Uid:a753325e-f90f-4e6b-a9d0-a84ed1028eab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.675200 kubelet[2567]: E0421 10:10:38.675178 2567 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.675239 kubelet[2567]: E0421 10:10:38.675207 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c9bd8455-plt4c" Apr 21 10:10:38.675239 kubelet[2567]: E0421 10:10:38.675230 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5c9bd8455-plt4c" Apr 21 10:10:38.675293 kubelet[2567]: E0421 10:10:38.675268 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9bd8455-plt4c_calico-system(a753325e-f90f-4e6b-a9d0-a84ed1028eab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5c9bd8455-plt4c_calico-system(a753325e-f90f-4e6b-a9d0-a84ed1028eab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5c9bd8455-plt4c" podUID="a753325e-f90f-4e6b-a9d0-a84ed1028eab" Apr 21 10:10:38.681682 containerd[1504]: time="2026-04-21T10:10:38.681647140Z" level=error msg="Failed to destroy network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.682113 containerd[1504]: time="2026-04-21T10:10:38.682022903Z" level=error msg="encountered an error cleaning up failed sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.682113 containerd[1504]: time="2026-04-21T10:10:38.682058336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9sdn,Uid:ce1fbcf9-9c29-4f48-8a24-9477c46cb787,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.682766 kubelet[2567]: E0421 10:10:38.682236 2567 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:10:38.682766 kubelet[2567]: E0421 10:10:38.682266 2567 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d9sdn" Apr 21 10:10:38.682766 kubelet[2567]: E0421 10:10:38.682282 2567 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d9sdn" Apr 21 10:10:38.682848 kubelet[2567]: E0421 10:10:38.682311 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-d9sdn_kube-system(ce1fbcf9-9c29-4f48-8a24-9477c46cb787)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-d9sdn_kube-system(ce1fbcf9-9c29-4f48-8a24-9477c46cb787)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d9sdn" podUID="ce1fbcf9-9c29-4f48-8a24-9477c46cb787" Apr 21 10:10:38.691673 systemd[1]: Started cri-containerd-e97d8e08b061d36ac26c0a8a4ff9047514af1be43302e9530d9c81cbbf33c58d.scope - libcontainer container e97d8e08b061d36ac26c0a8a4ff9047514af1be43302e9530d9c81cbbf33c58d. Apr 21 10:10:38.721445 containerd[1504]: time="2026-04-21T10:10:38.721406503Z" level=info msg="StartContainer for \"e97d8e08b061d36ac26c0a8a4ff9047514af1be43302e9530d9c81cbbf33c58d\" returns successfully" Apr 21 10:10:39.423950 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d-shm.mount: Deactivated successfully. Apr 21 10:10:39.424175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af-shm.mount: Deactivated successfully. Apr 21 10:10:39.424359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7-shm.mount: Deactivated successfully. Apr 21 10:10:39.424529 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a-shm.mount: Deactivated successfully. Apr 21 10:10:39.424736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6-shm.mount: Deactivated successfully. 
Apr 21 10:10:39.595036 kubelet[2567]: I0421 10:10:39.594982 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:10:39.596891 containerd[1504]: time="2026-04-21T10:10:39.596156613Z" level=info msg="StopPodSandbox for \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\"" Apr 21 10:10:39.596891 containerd[1504]: time="2026-04-21T10:10:39.596641228Z" level=info msg="Ensure that sandbox 6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530 in task-service has been cleanup successfully" Apr 21 10:10:39.600305 kubelet[2567]: I0421 10:10:39.600238 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:10:39.601642 containerd[1504]: time="2026-04-21T10:10:39.600996467Z" level=info msg="StopPodSandbox for \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\"" Apr 21 10:10:39.603751 containerd[1504]: time="2026-04-21T10:10:39.603427184Z" level=info msg="Ensure that sandbox fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af in task-service has been cleanup successfully" Apr 21 10:10:39.618281 kubelet[2567]: I0421 10:10:39.618194 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:10:39.623233 containerd[1504]: time="2026-04-21T10:10:39.622727608Z" level=info msg="StopPodSandbox for \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\"" Apr 21 10:10:39.623233 containerd[1504]: time="2026-04-21T10:10:39.622953577Z" level=info msg="Ensure that sandbox 93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16 in task-service has been cleanup successfully" Apr 21 10:10:39.626014 kubelet[2567]: I0421 10:10:39.625948 2567 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:10:39.628181 containerd[1504]: time="2026-04-21T10:10:39.628137665Z" level=info msg="StopPodSandbox for \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\"" Apr 21 10:10:39.630087 containerd[1504]: time="2026-04-21T10:10:39.629927003Z" level=info msg="Ensure that sandbox c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d in task-service has been cleanup successfully" Apr 21 10:10:39.633045 kubelet[2567]: I0421 10:10:39.633002 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:10:39.634052 containerd[1504]: time="2026-04-21T10:10:39.634023965Z" level=info msg="StopPodSandbox for \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\"" Apr 21 10:10:39.634224 containerd[1504]: time="2026-04-21T10:10:39.634136323Z" level=info msg="Ensure that sandbox 6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c in task-service has been cleanup successfully" Apr 21 10:10:39.640224 kubelet[2567]: I0421 10:10:39.640127 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:10:39.641609 containerd[1504]: time="2026-04-21T10:10:39.641256669Z" level=info msg="StopPodSandbox for \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\"" Apr 21 10:10:39.641609 containerd[1504]: time="2026-04-21T10:10:39.641403289Z" level=info msg="Ensure that sandbox ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7 in task-service has been cleanup successfully" Apr 21 10:10:39.648299 kubelet[2567]: I0421 10:10:39.648248 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l9qqk" podStartSLOduration=3.644268028 podStartE2EDuration="15.648234272s" 
podCreationTimestamp="2026-04-21 10:10:24 +0000 UTC" firstStartedPulling="2026-04-21 10:10:25.352273461 +0000 UTC m=+14.961783535" lastFinishedPulling="2026-04-21 10:10:37.356239705 +0000 UTC m=+26.965749779" observedRunningTime="2026-04-21 10:10:39.641129849 +0000 UTC m=+29.250639933" watchObservedRunningTime="2026-04-21 10:10:39.648234272 +0000 UTC m=+29.257744336" Apr 21 10:10:39.650306 kubelet[2567]: I0421 10:10:39.650006 2567 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:10:39.660212 containerd[1504]: time="2026-04-21T10:10:39.660128302Z" level=info msg="StopPodSandbox for \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\"" Apr 21 10:10:39.660637 containerd[1504]: time="2026-04-21T10:10:39.660609833Z" level=info msg="Ensure that sandbox b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a in task-service has been cleanup successfully" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.766 [INFO][3787] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.766 [INFO][3787] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" iface="eth0" netns="/var/run/netns/cni-fdeedef1-8843-6204-f05c-e4186a605271" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.766 [INFO][3787] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" iface="eth0" netns="/var/run/netns/cni-fdeedef1-8843-6204-f05c-e4186a605271" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.766 [INFO][3787] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" iface="eth0" netns="/var/run/netns/cni-fdeedef1-8843-6204-f05c-e4186a605271" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.766 [INFO][3787] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.767 [INFO][3787] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.830 [INFO][3846] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.833 [INFO][3846] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.833 [INFO][3846] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.841 [WARNING][3846] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.841 [INFO][3846] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.842 [INFO][3846] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:39.869384 containerd[1504]: 2026-04-21 10:10:39.860 [INFO][3787] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:10:39.871848 containerd[1504]: time="2026-04-21T10:10:39.871608199Z" level=info msg="TearDown network for sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\" successfully" Apr 21 10:10:39.871848 containerd[1504]: time="2026-04-21T10:10:39.871637033Z" level=info msg="StopPodSandbox for \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\" returns successfully" Apr 21 10:10:39.875437 systemd[1]: run-netns-cni\x2dfdeedef1\x2d8843\x2d6204\x2df05c\x2de4186a605271.mount: Deactivated successfully. 
Apr 21 10:10:39.877740 containerd[1504]: time="2026-04-21T10:10:39.877721108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-plt4c,Uid:a753325e-f90f-4e6b-a9d0-a84ed1028eab,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.804 [INFO][3799] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.807 [INFO][3799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" iface="eth0" netns="/var/run/netns/cni-1017c046-9c4a-242d-9cbf-625329249335" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.807 [INFO][3799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" iface="eth0" netns="/var/run/netns/cni-1017c046-9c4a-242d-9cbf-625329249335" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.810 [INFO][3799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" iface="eth0" netns="/var/run/netns/cni-1017c046-9c4a-242d-9cbf-625329249335" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.810 [INFO][3799] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.810 [INFO][3799] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.854 [INFO][3878] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.854 [INFO][3878] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.854 [INFO][3878] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.859 [WARNING][3878] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.859 [INFO][3878] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.861 [INFO][3878] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:39.878336 containerd[1504]: 2026-04-21 10:10:39.871 [INFO][3799] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:10:39.879473 containerd[1504]: time="2026-04-21T10:10:39.879456435Z" level=info msg="TearDown network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\" successfully" Apr 21 10:10:39.879533 containerd[1504]: time="2026-04-21T10:10:39.879523446Z" level=info msg="StopPodSandbox for \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\" returns successfully" Apr 21 10:10:39.881118 systemd[1]: run-netns-cni\x2d1017c046\x2d9c4a\x2d242d\x2d9cbf\x2d625329249335.mount: Deactivated successfully. 
Apr 21 10:10:39.881738 containerd[1504]: time="2026-04-21T10:10:39.881717799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8gxsz,Uid:93fb5265-098d-445f-9cde-fcef06ca57d0,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.759 [INFO][3764] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.759 [INFO][3764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" iface="eth0" netns="/var/run/netns/cni-5fcebf3d-b2e0-d6c6-3516-aae0af059155" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.760 [INFO][3764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" iface="eth0" netns="/var/run/netns/cni-5fcebf3d-b2e0-d6c6-3516-aae0af059155" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.768 [INFO][3764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" iface="eth0" netns="/var/run/netns/cni-5fcebf3d-b2e0-d6c6-3516-aae0af059155" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.768 [INFO][3764] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.768 [INFO][3764] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.855 [INFO][3850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.855 [INFO][3850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.862 [INFO][3850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.877 [WARNING][3850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.877 [INFO][3850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.882 [INFO][3850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:39.895729 containerd[1504]: 2026-04-21 10:10:39.889 [INFO][3764] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:10:39.896366 containerd[1504]: time="2026-04-21T10:10:39.896339001Z" level=info msg="TearDown network for sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\" successfully" Apr 21 10:10:39.897065 containerd[1504]: time="2026-04-21T10:10:39.897045419Z" level=info msg="StopPodSandbox for \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\" returns successfully" Apr 21 10:10:39.901378 systemd[1]: run-netns-cni\x2d5fcebf3d\x2db2e0\x2dd6c6\x2d3516\x2daae0af059155.mount: Deactivated successfully. Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.767 [INFO][3822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.767 [INFO][3822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" iface="eth0" netns="/var/run/netns/cni-cae9e9ce-4034-58eb-ed77-ef7dfbe2af29" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.767 [INFO][3822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" iface="eth0" netns="/var/run/netns/cni-cae9e9ce-4034-58eb-ed77-ef7dfbe2af29" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.768 [INFO][3822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" iface="eth0" netns="/var/run/netns/cni-cae9e9ce-4034-58eb-ed77-ef7dfbe2af29" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.768 [INFO][3822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.768 [INFO][3822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.874 [INFO][3849] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.874 [INFO][3849] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.882 [INFO][3849] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.893 [WARNING][3849] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.893 [INFO][3849] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.897 [INFO][3849] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:39.917533 containerd[1504]: 2026-04-21 10:10:39.912 [INFO][3822] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:10:39.918337 containerd[1504]: time="2026-04-21T10:10:39.918061221Z" level=info msg="TearDown network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\" successfully" Apr 21 10:10:39.918337 containerd[1504]: time="2026-04-21T10:10:39.918190884Z" level=info msg="StopPodSandbox for \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\" returns successfully" Apr 21 10:10:39.920774 containerd[1504]: time="2026-04-21T10:10:39.920743745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-nbw87,Uid:4c0203ee-d1e8-4697-85ba-00626b1ad292,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.770 [INFO][3754] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.770 [INFO][3754] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" iface="eth0" netns="/var/run/netns/cni-f2769d55-e6cf-9c66-32d3-81d785f911b6" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.771 [INFO][3754] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" iface="eth0" netns="/var/run/netns/cni-f2769d55-e6cf-9c66-32d3-81d785f911b6" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.771 [INFO][3754] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" iface="eth0" netns="/var/run/netns/cni-f2769d55-e6cf-9c66-32d3-81d785f911b6" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.771 [INFO][3754] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.771 [INFO][3754] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.885 [INFO][3852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.886 [INFO][3852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.898 [INFO][3852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.919 [WARNING][3852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.919 [INFO][3852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.921 [INFO][3852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:39.930719 containerd[1504]: 2026-04-21 10:10:39.927 [INFO][3754] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:10:39.932373 containerd[1504]: time="2026-04-21T10:10:39.932095764Z" level=info msg="TearDown network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\" successfully" Apr 21 10:10:39.932373 containerd[1504]: time="2026-04-21T10:10:39.932307321Z" level=info msg="StopPodSandbox for \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\" returns successfully" Apr 21 10:10:39.933064 containerd[1504]: time="2026-04-21T10:10:39.933049782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7f724,Uid:a5cda479-2770-45d5-b204-a8ebaf013eb6,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.787 [INFO][3808] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.787 [INFO][3808] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" iface="eth0" netns="/var/run/netns/cni-84981270-5148-b058-a99b-68c79c227e30" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.787 [INFO][3808] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" iface="eth0" netns="/var/run/netns/cni-84981270-5148-b058-a99b-68c79c227e30" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.787 [INFO][3808] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" iface="eth0" netns="/var/run/netns/cni-84981270-5148-b058-a99b-68c79c227e30" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.787 [INFO][3808] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.787 [INFO][3808] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.890 [INFO][3872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.890 [INFO][3872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.922 [INFO][3872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.931 [WARNING][3872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.931 [INFO][3872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.937 [INFO][3872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:39.949178 containerd[1504]: 2026-04-21 10:10:39.943 [INFO][3808] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:10:39.949726 containerd[1504]: time="2026-04-21T10:10:39.949410237Z" level=info msg="TearDown network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\" successfully" Apr 21 10:10:39.949836 containerd[1504]: time="2026-04-21T10:10:39.949592971Z" level=info msg="StopPodSandbox for \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\" returns successfully" Apr 21 10:10:39.951545 containerd[1504]: time="2026-04-21T10:10:39.951511702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9sdn,Uid:ce1fbcf9-9c29-4f48-8a24-9477c46cb787,Namespace:kube-system,Attempt:1,}" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.773 [INFO][3823] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.773 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" iface="eth0" netns="/var/run/netns/cni-7825d62b-f082-f194-3332-7ed3cf015e6d" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.774 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" iface="eth0" netns="/var/run/netns/cni-7825d62b-f082-f194-3332-7ed3cf015e6d" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.774 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" iface="eth0" netns="/var/run/netns/cni-7825d62b-f082-f194-3332-7ed3cf015e6d" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.774 [INFO][3823] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.774 [INFO][3823] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.892 [INFO][3864] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.892 [INFO][3864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.935 [INFO][3864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.942 [WARNING][3864] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.943 [INFO][3864] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.944 [INFO][3864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:39.960065 containerd[1504]: 2026-04-21 10:10:39.952 [INFO][3823] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:10:39.960342 containerd[1504]: time="2026-04-21T10:10:39.960176060Z" level=info msg="TearDown network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\" successfully" Apr 21 10:10:39.960342 containerd[1504]: time="2026-04-21T10:10:39.960194686Z" level=info msg="StopPodSandbox for \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\" returns successfully" Apr 21 10:10:39.961944 containerd[1504]: time="2026-04-21T10:10:39.961705077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb6dbfd9-g7plt,Uid:b3626028-18b5-47bc-859d-14b0862e581b,Namespace:calico-system,Attempt:1,}" Apr 21 10:10:39.977036 kubelet[2567]: I0421 10:10:39.976998 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx9c8\" (UniqueName: \"kubernetes.io/projected/37a77d25-20a9-48a5-b82c-bd7602bba49a-kube-api-access-kx9c8\") pod 
\"37a77d25-20a9-48a5-b82c-bd7602bba49a\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " Apr 21 10:10:39.977036 kubelet[2567]: I0421 10:10:39.977037 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-nginx-config\") pod \"37a77d25-20a9-48a5-b82c-bd7602bba49a\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " Apr 21 10:10:39.977682 kubelet[2567]: I0421 10:10:39.977537 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-backend-key-pair\") pod \"37a77d25-20a9-48a5-b82c-bd7602bba49a\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " Apr 21 10:10:39.977682 kubelet[2567]: I0421 10:10:39.977555 2567 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-ca-bundle\") pod \"37a77d25-20a9-48a5-b82c-bd7602bba49a\" (UID: \"37a77d25-20a9-48a5-b82c-bd7602bba49a\") " Apr 21 10:10:39.980400 kubelet[2567]: I0421 10:10:39.980187 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "37a77d25-20a9-48a5-b82c-bd7602bba49a" (UID: "37a77d25-20a9-48a5-b82c-bd7602bba49a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:10:39.980699 kubelet[2567]: I0421 10:10:39.980681 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "37a77d25-20a9-48a5-b82c-bd7602bba49a" (UID: "37a77d25-20a9-48a5-b82c-bd7602bba49a"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:10:39.983400 kubelet[2567]: I0421 10:10:39.983174 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "37a77d25-20a9-48a5-b82c-bd7602bba49a" (UID: "37a77d25-20a9-48a5-b82c-bd7602bba49a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:10:39.983664 kubelet[2567]: I0421 10:10:39.983650 2567 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a77d25-20a9-48a5-b82c-bd7602bba49a-kube-api-access-kx9c8" (OuterVolumeSpecName: "kube-api-access-kx9c8") pod "37a77d25-20a9-48a5-b82c-bd7602bba49a" (UID: "37a77d25-20a9-48a5-b82c-bd7602bba49a"). InnerVolumeSpecName "kube-api-access-kx9c8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:10:40.077978 kubelet[2567]: I0421 10:10:40.077941 2567 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kx9c8\" (UniqueName: \"kubernetes.io/projected/37a77d25-20a9-48a5-b82c-bd7602bba49a-kube-api-access-kx9c8\") on node \"ci-4081-3-7-a-afac96dda8\" DevicePath \"\"" Apr 21 10:10:40.078329 kubelet[2567]: I0421 10:10:40.078293 2567 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-nginx-config\") on node \"ci-4081-3-7-a-afac96dda8\" DevicePath \"\"" Apr 21 10:10:40.078329 kubelet[2567]: I0421 10:10:40.078305 2567 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-backend-key-pair\") on node \"ci-4081-3-7-a-afac96dda8\" DevicePath \"\"" Apr 21 10:10:40.078329 kubelet[2567]: I0421 10:10:40.078313 2567 reconciler_common.go:299] "Volume detached for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37a77d25-20a9-48a5-b82c-bd7602bba49a-whisker-ca-bundle\") on node \"ci-4081-3-7-a-afac96dda8\" DevicePath \"\"" Apr 21 10:10:40.154894 systemd-networkd[1416]: cali673ceeec748: Link UP Apr 21 10:10:40.158981 systemd-networkd[1416]: cali673ceeec748: Gained carrier Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:39.972 [ERROR][3906] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:39.996 [INFO][3906] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0 goldmane-5b85766d88- calico-system 93fb5265-098d-445f-9cde-fcef06ca57d0 903 0 2026-04-21 10:10:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 goldmane-5b85766d88-8gxsz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali673ceeec748 [] [] }} ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:39.996 [INFO][3906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.073 [INFO][3953] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" HandleID="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.090 [INFO][3953] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" HandleID="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e75c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"goldmane-5b85766d88-8gxsz", "timestamp":"2026-04-21 10:10:40.073807197 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000540c60)} Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.091 [INFO][3953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.091 [INFO][3953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.091 [INFO][3953] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.100 [INFO][3953] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.107 [INFO][3953] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.112 [INFO][3953] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.115 [INFO][3953] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.117 [INFO][3953] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.117 [INFO][3953] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.119 [INFO][3953] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526 Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.125 [INFO][3953] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.136 [INFO][3953] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.117.1/26] block=192.168.117.0/26 handle="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.136 [INFO][3953] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.1/26] handle="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.137 [INFO][3953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:40.236631 containerd[1504]: 2026-04-21 10:10:40.137 [INFO][3953] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.1/26] IPv6=[] ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" HandleID="k8s-pod-network.90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:40.237214 containerd[1504]: 2026-04-21 10:10:40.140 [INFO][3906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"93fb5265-098d-445f-9cde-fcef06ca57d0", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"goldmane-5b85766d88-8gxsz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali673ceeec748", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.237214 containerd[1504]: 2026-04-21 10:10:40.140 [INFO][3906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.1/32] ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:40.237214 containerd[1504]: 2026-04-21 10:10:40.140 [INFO][3906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali673ceeec748 ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:40.237214 containerd[1504]: 2026-04-21 10:10:40.174 [INFO][3906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:40.237214 containerd[1504]: 2026-04-21 10:10:40.190 [INFO][3906] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"93fb5265-098d-445f-9cde-fcef06ca57d0", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526", Pod:"goldmane-5b85766d88-8gxsz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali673ceeec748", MAC:"16:09:ff:a5:a4:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.237214 containerd[1504]: 2026-04-21 10:10:40.225 [INFO][3906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526" Namespace="calico-system" Pod="goldmane-5b85766d88-8gxsz" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:10:40.277233 systemd-networkd[1416]: cali849863cd68f: Link UP Apr 21 10:10:40.279257 systemd-networkd[1416]: cali849863cd68f: Gained carrier Apr 21 10:10:40.284206 containerd[1504]: time="2026-04-21T10:10:40.282836653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:40.284206 containerd[1504]: time="2026-04-21T10:10:40.282888551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:40.284206 containerd[1504]: time="2026-04-21T10:10:40.282898576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.284206 containerd[1504]: time="2026-04-21T10:10:40.282960598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:39.981 [ERROR][3897] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:39.999 [INFO][3897] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0 calico-apiserver-5c9bd8455- calico-system a753325e-f90f-4e6b-a9d0-a84ed1028eab 898 0 2026-04-21 10:10:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c9bd8455 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 calico-apiserver-5c9bd8455-plt4c eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali849863cd68f [] [] }} ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:39.999 [INFO][3897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.097 [INFO][3958] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" 
HandleID="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.111 [INFO][3958] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" HandleID="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"calico-apiserver-5c9bd8455-plt4c", "timestamp":"2026-04-21 10:10:40.097325621 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e26e0)} Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.112 [INFO][3958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.137 [INFO][3958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.137 [INFO][3958] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.213 [INFO][3958] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.232 [INFO][3958] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.249 [INFO][3958] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.252 [INFO][3958] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.254 [INFO][3958] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.254 [INFO][3958] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.257 [INFO][3958] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.260 [INFO][3958] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.266 [INFO][3958] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.117.2/26] block=192.168.117.0/26 handle="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.266 [INFO][3958] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.2/26] handle="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.310772 containerd[1504]: 2026-04-21 10:10:40.266 [INFO][3958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:40.311234 containerd[1504]: 2026-04-21 10:10:40.266 [INFO][3958] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.2/26] IPv6=[] ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" HandleID="k8s-pod-network.f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:40.311234 containerd[1504]: 2026-04-21 10:10:40.270 [INFO][3897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"a753325e-f90f-4e6b-a9d0-a84ed1028eab", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"calico-apiserver-5c9bd8455-plt4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali849863cd68f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.311234 containerd[1504]: 2026-04-21 10:10:40.270 [INFO][3897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.2/32] ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:40.311234 containerd[1504]: 2026-04-21 10:10:40.270 [INFO][3897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali849863cd68f ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:40.311234 containerd[1504]: 2026-04-21 10:10:40.287 [INFO][3897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" 
WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:40.311234 containerd[1504]: 2026-04-21 10:10:40.288 [INFO][3897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"a753325e-f90f-4e6b-a9d0-a84ed1028eab", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe", Pod:"calico-apiserver-5c9bd8455-plt4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali849863cd68f", MAC:"a6:5b:eb:a1:69:72", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.311373 containerd[1504]: 2026-04-21 10:10:40.300 [INFO][3897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-plt4c" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:10:40.329714 systemd[1]: Started cri-containerd-90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526.scope - libcontainer container 90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526. Apr 21 10:10:40.347628 containerd[1504]: time="2026-04-21T10:10:40.346883802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:40.347628 containerd[1504]: time="2026-04-21T10:10:40.346928068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:40.347628 containerd[1504]: time="2026-04-21T10:10:40.346940146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.347628 containerd[1504]: time="2026-04-21T10:10:40.347019706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.390898 systemd[1]: Started cri-containerd-f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe.scope - libcontainer container f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe. 
Apr 21 10:10:40.412476 systemd-networkd[1416]: caliecedd847344: Link UP Apr 21 10:10:40.417819 systemd-networkd[1416]: caliecedd847344: Gained carrier Apr 21 10:10:40.428724 systemd[1]: run-netns-cni\x2df2769d55\x2de6cf\x2d9c66\x2d32d3\x2d81d785f911b6.mount: Deactivated successfully. Apr 21 10:10:40.428820 systemd[1]: run-netns-cni\x2d84981270\x2d5148\x2db058\x2da99b\x2d68c79c227e30.mount: Deactivated successfully. Apr 21 10:10:40.428892 systemd[1]: run-netns-cni\x2dcae9e9ce\x2d4034\x2d58eb\x2ded77\x2def7dfbe2af29.mount: Deactivated successfully. Apr 21 10:10:40.428945 systemd[1]: run-netns-cni\x2d7825d62b\x2df082\x2df194\x2d3332\x2d7ed3cf015e6d.mount: Deactivated successfully. Apr 21 10:10:40.429001 systemd[1]: var-lib-kubelet-pods-37a77d25\x2d20a9\x2d48a5\x2db82c\x2dbd7602bba49a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkx9c8.mount: Deactivated successfully. Apr 21 10:10:40.429059 systemd[1]: var-lib-kubelet-pods-37a77d25\x2d20a9\x2d48a5\x2db82c\x2dbd7602bba49a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.000 [ERROR][3917] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.014 [INFO][3917] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0 calico-apiserver-5c9bd8455- calico-system 4c0203ee-d1e8-4697-85ba-00626b1ad292 899 0 2026-04-21 10:10:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c9bd8455 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 calico-apiserver-5c9bd8455-nbw87 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliecedd847344 [] [] }} ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.014 [INFO][3917] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.160 [INFO][3970] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" HandleID="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" 
Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.188 [INFO][3970] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" HandleID="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000491900), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"calico-apiserver-5c9bd8455-nbw87", "timestamp":"2026-04-21 10:10:40.160006174 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188c60)} Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.190 [INFO][3970] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.267 [INFO][3970] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.267 [INFO][3970] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.296 [INFO][3970] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.326 [INFO][3970] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.345 [INFO][3970] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.348 [INFO][3970] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.351 [INFO][3970] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.352 [INFO][3970] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.357 [INFO][3970] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.369 [INFO][3970] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.385 [INFO][3970] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.117.3/26] block=192.168.117.0/26 handle="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.385 [INFO][3970] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.3/26] handle="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.443700 containerd[1504]: 2026-04-21 10:10:40.385 [INFO][3970] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:40.444135 containerd[1504]: 2026-04-21 10:10:40.385 [INFO][3970] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.3/26] IPv6=[] ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" HandleID="k8s-pod-network.f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:40.444135 containerd[1504]: 2026-04-21 10:10:40.392 [INFO][3917] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"4c0203ee-d1e8-4697-85ba-00626b1ad292", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"calico-apiserver-5c9bd8455-nbw87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliecedd847344", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.444135 containerd[1504]: 2026-04-21 10:10:40.392 [INFO][3917] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.3/32] ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:40.444135 containerd[1504]: 2026-04-21 10:10:40.392 [INFO][3917] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecedd847344 ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:40.444135 containerd[1504]: 2026-04-21 10:10:40.415 [INFO][3917] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" 
WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:40.444135 containerd[1504]: 2026-04-21 10:10:40.418 [INFO][3917] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"4c0203ee-d1e8-4697-85ba-00626b1ad292", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea", Pod:"calico-apiserver-5c9bd8455-nbw87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliecedd847344", MAC:"06:65:31:06:ab:67", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.444274 containerd[1504]: 2026-04-21 10:10:40.438 [INFO][3917] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea" Namespace="calico-system" Pod="calico-apiserver-5c9bd8455-nbw87" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:10:40.499635 systemd[1]: Removed slice kubepods-besteffort-pod37a77d25_20a9_48a5_b82c_bd7602bba49a.slice - libcontainer container kubepods-besteffort-pod37a77d25_20a9_48a5_b82c_bd7602bba49a.slice. Apr 21 10:10:40.508186 systemd-networkd[1416]: califf1654f9d40: Link UP Apr 21 10:10:40.508377 systemd-networkd[1416]: califf1654f9d40: Gained carrier Apr 21 10:10:40.534006 containerd[1504]: time="2026-04-21T10:10:40.522682186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:40.534006 containerd[1504]: time="2026-04-21T10:10:40.522721895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:40.534006 containerd[1504]: time="2026-04-21T10:10:40.522731930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.534006 containerd[1504]: time="2026-04-21T10:10:40.522810739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.536211 containerd[1504]: time="2026-04-21T10:10:40.536132309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8gxsz,Uid:93fb5265-098d-445f-9cde-fcef06ca57d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526\"" Apr 21 10:10:40.548335 containerd[1504]: time="2026-04-21T10:10:40.548171220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:10:40.553403 systemd[1]: run-containerd-runc-k8s.io-f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea-runc.YBGVuS.mount: Deactivated successfully. Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.156 [ERROR][3989] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.180 [INFO][3989] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0 calico-kube-controllers-9cb6dbfd9- calico-system b3626028-18b5-47bc-859d-14b0862e581b 901 0 2026-04-21 10:10:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9cb6dbfd9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 calico-kube-controllers-9cb6dbfd9-g7plt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califf1654f9d40 [] [] }} ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" 
WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.181 [INFO][3989] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.228 [INFO][4053] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" HandleID="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.238 [INFO][4053] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" HandleID="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"calico-kube-controllers-9cb6dbfd9-g7plt", "timestamp":"2026-04-21 10:10:40.228377796 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036f8c0)} Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.238 [INFO][4053] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.386 [INFO][4053] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.387 [INFO][4053] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.398 [INFO][4053] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.427 [INFO][4053] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.450 [INFO][4053] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.452 [INFO][4053] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.457 [INFO][4053] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.457 [INFO][4053] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.459 [INFO][4053] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6 Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.464 [INFO][4053] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" 
host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.489 [INFO][4053] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.117.4/26] block=192.168.117.0/26 handle="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.491 [INFO][4053] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.4/26] handle="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.561025 containerd[1504]: 2026-04-21 10:10:40.494 [INFO][4053] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:40.561444 containerd[1504]: 2026-04-21 10:10:40.495 [INFO][4053] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.4/26] IPv6=[] ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" HandleID="k8s-pod-network.63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:40.561444 containerd[1504]: 2026-04-21 10:10:40.503 [INFO][3989] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0", GenerateName:"calico-kube-controllers-9cb6dbfd9-", Namespace:"calico-system", SelfLink:"", UID:"b3626028-18b5-47bc-859d-14b0862e581b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb6dbfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"calico-kube-controllers-9cb6dbfd9-g7plt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf1654f9d40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.561444 containerd[1504]: 2026-04-21 10:10:40.503 [INFO][3989] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.4/32] ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:40.561444 containerd[1504]: 2026-04-21 10:10:40.503 [INFO][3989] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf1654f9d40 ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:40.561444 containerd[1504]: 2026-04-21 10:10:40.509 [INFO][3989] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:40.561444 containerd[1504]: 2026-04-21 10:10:40.513 [INFO][3989] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0", GenerateName:"calico-kube-controllers-9cb6dbfd9-", Namespace:"calico-system", SelfLink:"", UID:"b3626028-18b5-47bc-859d-14b0862e581b", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb6dbfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6", Pod:"calico-kube-controllers-9cb6dbfd9-g7plt", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf1654f9d40", MAC:"9e:ac:42:87:6b:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.561614 containerd[1504]: 2026-04-21 10:10:40.543 [INFO][3989] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6" Namespace="calico-system" Pod="calico-kube-controllers-9cb6dbfd9-g7plt" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:10:40.562792 systemd[1]: Started cri-containerd-f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea.scope - libcontainer container f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea. Apr 21 10:10:40.587939 containerd[1504]: time="2026-04-21T10:10:40.587679077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:40.587939 containerd[1504]: time="2026-04-21T10:10:40.587752166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:40.588228 containerd[1504]: time="2026-04-21T10:10:40.587761961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.588228 containerd[1504]: time="2026-04-21T10:10:40.587905325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.628709 systemd-networkd[1416]: cali090703fc46b: Link UP Apr 21 10:10:40.631236 systemd-networkd[1416]: cali090703fc46b: Gained carrier Apr 21 10:10:40.678721 containerd[1504]: time="2026-04-21T10:10:40.677532254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-nbw87,Uid:4c0203ee-d1e8-4697-85ba-00626b1ad292,Namespace:calico-system,Attempt:1,} returns sandbox id \"f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea\"" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.076 [ERROR][3941] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.106 [INFO][3941] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0 coredns-674b8bbfcf- kube-system ce1fbcf9-9c29-4f48-8a24-9477c46cb787 902 0 2026-04-21 10:10:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 coredns-674b8bbfcf-d9sdn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali090703fc46b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.106 [INFO][3941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.222 [INFO][4029] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" HandleID="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.241 [INFO][4029] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" HandleID="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee5d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"coredns-674b8bbfcf-d9sdn", "timestamp":"2026-04-21 10:10:40.222802931 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000114840)} Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.241 [INFO][4029] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.493 [INFO][4029] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.494 [INFO][4029] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.511 [INFO][4029] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.529 [INFO][4029] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.556 [INFO][4029] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.559 [INFO][4029] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.561 [INFO][4029] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.562 [INFO][4029] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.564 [INFO][4029] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53 Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.572 [INFO][4029] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.580 [INFO][4029] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.117.5/26] block=192.168.117.0/26 handle="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.581 [INFO][4029] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.5/26] handle="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.585 [INFO][4029] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:40.685639 containerd[1504]: 2026-04-21 10:10:40.585 [INFO][4029] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.5/26] IPv6=[] ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" HandleID="k8s-pod-network.c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:40.686076 containerd[1504]: 2026-04-21 10:10:40.625 [INFO][3941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce1fbcf9-9c29-4f48-8a24-9477c46cb787", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"coredns-674b8bbfcf-d9sdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali090703fc46b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.686076 containerd[1504]: 2026-04-21 10:10:40.626 [INFO][3941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.5/32] ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:40.686076 containerd[1504]: 2026-04-21 10:10:40.626 [INFO][3941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali090703fc46b ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:40.686076 containerd[1504]: 2026-04-21 10:10:40.634 [INFO][3941] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:40.686076 containerd[1504]: 2026-04-21 10:10:40.647 [INFO][3941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce1fbcf9-9c29-4f48-8a24-9477c46cb787", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53", Pod:"coredns-674b8bbfcf-d9sdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali090703fc46b", 
MAC:"82:86:47:29:00:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.686214 containerd[1504]: 2026-04-21 10:10:40.683 [INFO][3941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53" Namespace="kube-system" Pod="coredns-674b8bbfcf-d9sdn" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:10:40.692373 containerd[1504]: time="2026-04-21T10:10:40.692333429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9bd8455-plt4c,Uid:a753325e-f90f-4e6b-a9d0-a84ed1028eab,Namespace:calico-system,Attempt:1,} returns sandbox id \"f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe\"" Apr 21 10:10:40.702220 kubelet[2567]: I0421 10:10:40.701176 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:40.715618 systemd[1]: Started cri-containerd-63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6.scope - libcontainer container 63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6. Apr 21 10:10:40.743147 systemd-networkd[1416]: cali9b12bb0e5da: Link UP Apr 21 10:10:40.744279 systemd-networkd[1416]: cali9b12bb0e5da: Gained carrier Apr 21 10:10:40.763177 containerd[1504]: time="2026-04-21T10:10:40.763018143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:40.763177 containerd[1504]: time="2026-04-21T10:10:40.763070973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:40.763177 containerd[1504]: time="2026-04-21T10:10:40.763099345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.763740 containerd[1504]: time="2026-04-21T10:10:40.763537431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.781994 systemd[1]: Started cri-containerd-c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53.scope - libcontainer container c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53. Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.044 [ERROR][3929] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.066 [INFO][3929] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0 csi-node-driver- calico-system a5cda479-2770-45d5-b204-a8ebaf013eb6 900 0 2026-04-21 10:10:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 csi-node-driver-7f724 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9b12bb0e5da [] [] }} 
ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.067 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.225 [INFO][4010] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" HandleID="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.244 [INFO][4010] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" HandleID="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005dc350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"csi-node-driver-7f724", "timestamp":"2026-04-21 10:10:40.225832346 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001cb8c0)} Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.244 [INFO][4010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.582 [INFO][4010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.582 [INFO][4010] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.606 [INFO][4010] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.633 [INFO][4010] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.662 [INFO][4010] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.667 [INFO][4010] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.680 [INFO][4010] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.680 [INFO][4010] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.685 [INFO][4010] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9 Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.695 [INFO][4010] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" 
host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.710 [INFO][4010] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.117.6/26] block=192.168.117.0/26 handle="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.710 [INFO][4010] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.6/26] handle="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.710 [INFO][4010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:40.786222 containerd[1504]: 2026-04-21 10:10:40.710 [INFO][4010] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.6/26] IPv6=[] ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" HandleID="k8s-pod-network.e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:40.787554 containerd[1504]: 2026-04-21 10:10:40.717 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5cda479-2770-45d5-b204-a8ebaf013eb6", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"csi-node-driver-7f724", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b12bb0e5da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.787554 containerd[1504]: 2026-04-21 10:10:40.717 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.6/32] ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:40.787554 containerd[1504]: 2026-04-21 10:10:40.717 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b12bb0e5da ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:40.787554 containerd[1504]: 2026-04-21 10:10:40.745 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" 
Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:40.787554 containerd[1504]: 2026-04-21 10:10:40.746 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5cda479-2770-45d5-b204-a8ebaf013eb6", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9", Pod:"csi-node-driver-7f724", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b12bb0e5da", MAC:"3e:11:ef:ba:07:75", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:40.787554 containerd[1504]: 2026-04-21 10:10:40.771 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9" Namespace="calico-system" Pod="csi-node-driver-7f724" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:10:40.803063 systemd[1]: Created slice kubepods-besteffort-pod14664470_e9ca_44e9_a418_9bd20e3901b4.slice - libcontainer container kubepods-besteffort-pod14664470_e9ca_44e9_a418_9bd20e3901b4.slice. Apr 21 10:10:40.839632 containerd[1504]: time="2026-04-21T10:10:40.839299425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:40.839632 containerd[1504]: time="2026-04-21T10:10:40.839355329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:40.839632 containerd[1504]: time="2026-04-21T10:10:40.839365714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.839632 containerd[1504]: time="2026-04-21T10:10:40.839437351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:40.871707 systemd[1]: Started cri-containerd-e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9.scope - libcontainer container e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9. 
Apr 21 10:10:40.874322 containerd[1504]: time="2026-04-21T10:10:40.874289430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d9sdn,Uid:ce1fbcf9-9c29-4f48-8a24-9477c46cb787,Namespace:kube-system,Attempt:1,} returns sandbox id \"c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53\"" Apr 21 10:10:40.880711 containerd[1504]: time="2026-04-21T10:10:40.880657102Z" level=info msg="CreateContainer within sandbox \"c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:10:40.886276 kubelet[2567]: I0421 10:10:40.886130 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/14664470-e9ca-44e9-a418-9bd20e3901b4-nginx-config\") pod \"whisker-5c7988f64c-zv8dm\" (UID: \"14664470-e9ca-44e9-a418-9bd20e3901b4\") " pod="calico-system/whisker-5c7988f64c-zv8dm" Apr 21 10:10:40.886389 kubelet[2567]: I0421 10:10:40.886287 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14664470-e9ca-44e9-a418-9bd20e3901b4-whisker-backend-key-pair\") pod \"whisker-5c7988f64c-zv8dm\" (UID: \"14664470-e9ca-44e9-a418-9bd20e3901b4\") " pod="calico-system/whisker-5c7988f64c-zv8dm" Apr 21 10:10:40.886411 kubelet[2567]: I0421 10:10:40.886398 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvr48\" (UniqueName: \"kubernetes.io/projected/14664470-e9ca-44e9-a418-9bd20e3901b4-kube-api-access-dvr48\") pod \"whisker-5c7988f64c-zv8dm\" (UID: \"14664470-e9ca-44e9-a418-9bd20e3901b4\") " pod="calico-system/whisker-5c7988f64c-zv8dm" Apr 21 10:10:40.886427 kubelet[2567]: I0421 10:10:40.886412 2567 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/14664470-e9ca-44e9-a418-9bd20e3901b4-whisker-ca-bundle\") pod \"whisker-5c7988f64c-zv8dm\" (UID: \"14664470-e9ca-44e9-a418-9bd20e3901b4\") " pod="calico-system/whisker-5c7988f64c-zv8dm" Apr 21 10:10:40.902750 containerd[1504]: time="2026-04-21T10:10:40.902044175Z" level=info msg="CreateContainer within sandbox \"c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d31119074957e10bc8733d635e10c9e56f42addfaf0e53e16e6a9381b6d4a38\"" Apr 21 10:10:40.908355 containerd[1504]: time="2026-04-21T10:10:40.907987561Z" level=info msg="StartContainer for \"2d31119074957e10bc8733d635e10c9e56f42addfaf0e53e16e6a9381b6d4a38\"" Apr 21 10:10:40.925504 containerd[1504]: time="2026-04-21T10:10:40.925366444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7f724,Uid:a5cda479-2770-45d5-b204-a8ebaf013eb6,Namespace:calico-system,Attempt:1,} returns sandbox id \"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9\"" Apr 21 10:10:40.946433 containerd[1504]: time="2026-04-21T10:10:40.946313970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9cb6dbfd9-g7plt,Uid:b3626028-18b5-47bc-859d-14b0862e581b,Namespace:calico-system,Attempt:1,} returns sandbox id \"63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6\"" Apr 21 10:10:40.955614 kernel: calico-node[4112]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:10:40.956817 systemd[1]: Started cri-containerd-2d31119074957e10bc8733d635e10c9e56f42addfaf0e53e16e6a9381b6d4a38.scope - libcontainer container 2d31119074957e10bc8733d635e10c9e56f42addfaf0e53e16e6a9381b6d4a38. 
Apr 21 10:10:40.992534 containerd[1504]: time="2026-04-21T10:10:40.992451531Z" level=info msg="StartContainer for \"2d31119074957e10bc8733d635e10c9e56f42addfaf0e53e16e6a9381b6d4a38\" returns successfully" Apr 21 10:10:41.118507 containerd[1504]: time="2026-04-21T10:10:41.117365758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c7988f64c-zv8dm,Uid:14664470-e9ca-44e9-a418-9bd20e3901b4,Namespace:calico-system,Attempt:0,}" Apr 21 10:10:41.295643 systemd-networkd[1416]: cali12aab21fd9a: Link UP Apr 21 10:10:41.295823 systemd-networkd[1416]: cali12aab21fd9a: Gained carrier Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.214 [INFO][4463] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0 whisker-5c7988f64c- calico-system 14664470-e9ca-44e9-a418-9bd20e3901b4 940 0 2026-04-21 10:10:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c7988f64c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 whisker-5c7988f64c-zv8dm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali12aab21fd9a [] [] }} ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.214 [INFO][4463] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.253 [INFO][4479] ipam/ipam_plugin.go 235: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" HandleID="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.260 [INFO][4479] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" HandleID="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"whisker-5c7988f64c-zv8dm", "timestamp":"2026-04-21 10:10:41.253279415 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001942c0)} Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.260 [INFO][4479] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.260 [INFO][4479] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.260 [INFO][4479] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.262 [INFO][4479] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.267 [INFO][4479] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.277 [INFO][4479] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.278 [INFO][4479] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.280 [INFO][4479] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.280 [INFO][4479] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.281 [INFO][4479] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63 Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.284 [INFO][4479] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.289 [INFO][4479] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.117.7/26] block=192.168.117.0/26 handle="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.289 [INFO][4479] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.7/26] handle="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.289 [INFO][4479] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:41.312058 containerd[1504]: 2026-04-21 10:10:41.289 [INFO][4479] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.7/26] IPv6=[] ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" HandleID="k8s-pod-network.6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" Apr 21 10:10:41.313117 containerd[1504]: 2026-04-21 10:10:41.291 [INFO][4463] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0", GenerateName:"whisker-5c7988f64c-", Namespace:"calico-system", SelfLink:"", UID:"14664470-e9ca-44e9-a418-9bd20e3901b4", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c7988f64c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"whisker-5c7988f64c-zv8dm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12aab21fd9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:41.313117 containerd[1504]: 2026-04-21 10:10:41.291 [INFO][4463] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.7/32] ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" Apr 21 10:10:41.313117 containerd[1504]: 2026-04-21 10:10:41.291 [INFO][4463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12aab21fd9a ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" Apr 21 10:10:41.313117 containerd[1504]: 2026-04-21 10:10:41.296 [INFO][4463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" Apr 21 10:10:41.313117 containerd[1504]: 2026-04-21 10:10:41.297 [INFO][4463] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0", GenerateName:"whisker-5c7988f64c-", Namespace:"calico-system", SelfLink:"", UID:"14664470-e9ca-44e9-a418-9bd20e3901b4", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c7988f64c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63", Pod:"whisker-5c7988f64c-zv8dm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali12aab21fd9a", MAC:"ea:60:18:d9:06:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:41.313117 containerd[1504]: 2026-04-21 10:10:41.306 [INFO][4463] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63" Namespace="calico-system" Pod="whisker-5c7988f64c-zv8dm" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--5c7988f64c--zv8dm-eth0" Apr 21 10:10:41.382399 containerd[1504]: time="2026-04-21T10:10:41.381796187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:41.382399 containerd[1504]: time="2026-04-21T10:10:41.381833433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:41.382399 containerd[1504]: time="2026-04-21T10:10:41.381844559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:41.382399 containerd[1504]: time="2026-04-21T10:10:41.381925070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:41.401716 systemd[1]: Started cri-containerd-6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63.scope - libcontainer container 6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63. 
Apr 21 10:10:41.465546 containerd[1504]: time="2026-04-21T10:10:41.465191007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c7988f64c-zv8dm,Uid:14664470-e9ca-44e9-a418-9bd20e3901b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63\"" Apr 21 10:10:41.479376 systemd-networkd[1416]: vxlan.calico: Link UP Apr 21 10:10:41.479383 systemd-networkd[1416]: vxlan.calico: Gained carrier Apr 21 10:10:41.667790 systemd-networkd[1416]: cali849863cd68f: Gained IPv6LL Apr 21 10:10:41.722832 kubelet[2567]: I0421 10:10:41.722765 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d9sdn" podStartSLOduration=26.722745742 podStartE2EDuration="26.722745742s" podCreationTimestamp="2026-04-21 10:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:41.722472954 +0000 UTC m=+31.331983028" watchObservedRunningTime="2026-04-21 10:10:41.722745742 +0000 UTC m=+31.332255806" Apr 21 10:10:41.795725 systemd-networkd[1416]: cali673ceeec748: Gained IPv6LL Apr 21 10:10:41.988228 systemd-networkd[1416]: cali9b12bb0e5da: Gained IPv6LL Apr 21 10:10:42.243881 systemd-networkd[1416]: cali090703fc46b: Gained IPv6LL Apr 21 10:10:42.308186 systemd-networkd[1416]: califf1654f9d40: Gained IPv6LL Apr 21 10:10:42.373312 systemd-networkd[1416]: caliecedd847344: Gained IPv6LL Apr 21 10:10:42.475605 kubelet[2567]: I0421 10:10:42.474652 2567 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a77d25-20a9-48a5-b82c-bd7602bba49a" path="/var/lib/kubelet/pods/37a77d25-20a9-48a5-b82c-bd7602bba49a/volumes" Apr 21 10:10:42.692872 systemd-networkd[1416]: cali12aab21fd9a: Gained IPv6LL Apr 21 10:10:42.948078 systemd-networkd[1416]: vxlan.calico: Gained IPv6LL Apr 21 10:10:43.828705 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3004764441.mount: Deactivated successfully. Apr 21 10:10:44.143610 containerd[1504]: time="2026-04-21T10:10:44.143545721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:44.144797 containerd[1504]: time="2026-04-21T10:10:44.144769013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:10:44.145905 containerd[1504]: time="2026-04-21T10:10:44.145710072Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:44.147623 containerd[1504]: time="2026-04-21T10:10:44.147591349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:44.148247 containerd[1504]: time="2026-04-21T10:10:44.148218660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.60002059s" Apr 21 10:10:44.148247 containerd[1504]: time="2026-04-21T10:10:44.148243787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:10:44.149486 containerd[1504]: time="2026-04-21T10:10:44.149465347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:10:44.151922 containerd[1504]: time="2026-04-21T10:10:44.151899602Z" level=info 
msg="CreateContainer within sandbox \"90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:10:44.169477 containerd[1504]: time="2026-04-21T10:10:44.169449828Z" level=info msg="CreateContainer within sandbox \"90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d81650335ccd3a22950aa403172bfd65e10e6b25b6aa41f70c9563e1b11d5436\"" Apr 21 10:10:44.169973 containerd[1504]: time="2026-04-21T10:10:44.169801105Z" level=info msg="StartContainer for \"d81650335ccd3a22950aa403172bfd65e10e6b25b6aa41f70c9563e1b11d5436\"" Apr 21 10:10:44.202721 systemd[1]: Started cri-containerd-d81650335ccd3a22950aa403172bfd65e10e6b25b6aa41f70c9563e1b11d5436.scope - libcontainer container d81650335ccd3a22950aa403172bfd65e10e6b25b6aa41f70c9563e1b11d5436. Apr 21 10:10:44.249973 containerd[1504]: time="2026-04-21T10:10:44.249934235Z" level=info msg="StartContainer for \"d81650335ccd3a22950aa403172bfd65e10e6b25b6aa41f70c9563e1b11d5436\" returns successfully" Apr 21 10:10:45.730674 kubelet[2567]: I0421 10:10:45.730613 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:48.681434 containerd[1504]: time="2026-04-21T10:10:48.680761781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:48.681434 containerd[1504]: time="2026-04-21T10:10:48.681401050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:10:48.682231 containerd[1504]: time="2026-04-21T10:10:48.682199306Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:48.683959 containerd[1504]: time="2026-04-21T10:10:48.683932933Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:48.684618 containerd[1504]: time="2026-04-21T10:10:48.684505050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.535014515s" Apr 21 10:10:48.684618 containerd[1504]: time="2026-04-21T10:10:48.684528896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:10:48.685807 containerd[1504]: time="2026-04-21T10:10:48.685226872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:10:48.687823 containerd[1504]: time="2026-04-21T10:10:48.687789562Z" level=info msg="CreateContainer within sandbox \"f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:10:48.705975 containerd[1504]: time="2026-04-21T10:10:48.705932478Z" level=info msg="CreateContainer within sandbox \"f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d940466f4b96f6536bbb4b274aab09da3f471123490946741c4f9fc4f7a52314\"" Apr 21 10:10:48.706744 containerd[1504]: time="2026-04-21T10:10:48.706674099Z" level=info msg="StartContainer for \"d940466f4b96f6536bbb4b274aab09da3f471123490946741c4f9fc4f7a52314\"" Apr 21 10:10:48.737756 systemd[1]: Started cri-containerd-d940466f4b96f6536bbb4b274aab09da3f471123490946741c4f9fc4f7a52314.scope - libcontainer 
container d940466f4b96f6536bbb4b274aab09da3f471123490946741c4f9fc4f7a52314. Apr 21 10:10:48.773910 containerd[1504]: time="2026-04-21T10:10:48.773851342Z" level=info msg="StartContainer for \"d940466f4b96f6536bbb4b274aab09da3f471123490946741c4f9fc4f7a52314\" returns successfully" Apr 21 10:10:49.159348 containerd[1504]: time="2026-04-21T10:10:49.159292042Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:49.164486 containerd[1504]: time="2026-04-21T10:10:49.163526547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:10:49.165314 kubelet[2567]: I0421 10:10:49.165226 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:49.171130 containerd[1504]: time="2026-04-21T10:10:49.171011275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 485.764093ms" Apr 21 10:10:49.171130 containerd[1504]: time="2026-04-21T10:10:49.171037985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:10:49.192310 containerd[1504]: time="2026-04-21T10:10:49.192247018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:10:49.204800 containerd[1504]: time="2026-04-21T10:10:49.204748312Z" level=info msg="CreateContainer within sandbox \"f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:10:49.231095 containerd[1504]: 
time="2026-04-21T10:10:49.231056799Z" level=info msg="CreateContainer within sandbox \"f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d071378fa883e760124c5b6539bc62fe8936b771cedee0b949b0c07cfc02a196\"" Apr 21 10:10:49.232706 containerd[1504]: time="2026-04-21T10:10:49.232685309Z" level=info msg="StartContainer for \"d071378fa883e760124c5b6539bc62fe8936b771cedee0b949b0c07cfc02a196\"" Apr 21 10:10:49.264490 systemd[1]: Started cri-containerd-d071378fa883e760124c5b6539bc62fe8936b771cedee0b949b0c07cfc02a196.scope - libcontainer container d071378fa883e760124c5b6539bc62fe8936b771cedee0b949b0c07cfc02a196. Apr 21 10:10:49.312015 containerd[1504]: time="2026-04-21T10:10:49.311977777Z" level=info msg="StartContainer for \"d071378fa883e760124c5b6539bc62fe8936b771cedee0b949b0c07cfc02a196\" returns successfully" Apr 21 10:10:49.354560 kubelet[2567]: I0421 10:10:49.354331 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-8gxsz" podStartSLOduration=21.746959416 podStartE2EDuration="25.354317729s" podCreationTimestamp="2026-04-21 10:10:24 +0000 UTC" firstStartedPulling="2026-04-21 10:10:40.541670209 +0000 UTC m=+30.151180283" lastFinishedPulling="2026-04-21 10:10:44.149028532 +0000 UTC m=+33.758538596" observedRunningTime="2026-04-21 10:10:44.767749697 +0000 UTC m=+34.377259771" watchObservedRunningTime="2026-04-21 10:10:49.354317729 +0000 UTC m=+38.963827803" Apr 21 10:10:49.469921 containerd[1504]: time="2026-04-21T10:10:49.468735555Z" level=info msg="StopPodSandbox for \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\"" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.521 [INFO][4852] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.522 [INFO][4852] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" iface="eth0" netns="/var/run/netns/cni-3a520b0d-89c4-1318-be27-8f9a905dbc32" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.522 [INFO][4852] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" iface="eth0" netns="/var/run/netns/cni-3a520b0d-89c4-1318-be27-8f9a905dbc32" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.522 [INFO][4852] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" iface="eth0" netns="/var/run/netns/cni-3a520b0d-89c4-1318-be27-8f9a905dbc32" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.522 [INFO][4852] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.522 [INFO][4852] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.551 [INFO][4859] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.551 [INFO][4859] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.552 [INFO][4859] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.560 [WARNING][4859] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.560 [INFO][4859] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.564 [INFO][4859] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:10:49.571225 containerd[1504]: 2026-04-21 10:10:49.568 [INFO][4852] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:10:49.573081 containerd[1504]: time="2026-04-21T10:10:49.572625666Z" level=info msg="TearDown network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\" successfully" Apr 21 10:10:49.573081 containerd[1504]: time="2026-04-21T10:10:49.572651745Z" level=info msg="StopPodSandbox for \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\" returns successfully" Apr 21 10:10:49.573261 containerd[1504]: time="2026-04-21T10:10:49.573225955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sr8nw,Uid:89203aa1-f58e-43c1-aeac-389c8c4e354d,Namespace:kube-system,Attempt:1,}" Apr 21 10:10:49.695947 systemd-networkd[1416]: caliad298dc0ef8: Link UP Apr 21 10:10:49.696142 systemd-networkd[1416]: caliad298dc0ef8: Gained carrier Apr 21 10:10:49.701997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437148983.mount: Deactivated successfully. Apr 21 10:10:49.702083 systemd[1]: run-netns-cni\x2d3a520b0d\x2d89c4\x2d1318\x2dbe27\x2d8f9a905dbc32.mount: Deactivated successfully. 
Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.630 [INFO][4866] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0 coredns-674b8bbfcf- kube-system 89203aa1-f58e-43c1-aeac-389c8c4e354d 995 0 2026-04-21 10:10:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-a-afac96dda8 coredns-674b8bbfcf-sr8nw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliad298dc0ef8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.630 [INFO][4866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.656 [INFO][4877] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" HandleID="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.664 [INFO][4877] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" HandleID="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" 
Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef4b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-a-afac96dda8", "pod":"coredns-674b8bbfcf-sr8nw", "timestamp":"2026-04-21 10:10:49.656702484 +0000 UTC"}, Hostname:"ci-4081-3-7-a-afac96dda8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000401080)} Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.664 [INFO][4877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.664 [INFO][4877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.664 [INFO][4877] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-a-afac96dda8' Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.667 [INFO][4877] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.670 [INFO][4877] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.675 [INFO][4877] ipam/ipam.go 526: Trying affinity for 192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.676 [INFO][4877] ipam/ipam.go 160: Attempting to load block cidr=192.168.117.0/26 host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.678 [INFO][4877] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.117.0/26 
host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.678 [INFO][4877] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.117.0/26 handle="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.680 [INFO][4877] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.683 [INFO][4877] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.117.0/26 handle="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.688 [INFO][4877] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.117.8/26] block=192.168.117.0/26 handle="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.688 [INFO][4877] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.117.8/26] handle="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" host="ci-4081-3-7-a-afac96dda8" Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.688 [INFO][4877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:10:49.723602 containerd[1504]: 2026-04-21 10:10:49.688 [INFO][4877] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.117.8/26] IPv6=[] ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" HandleID="k8s-pod-network.8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.724549 containerd[1504]: 2026-04-21 10:10:49.691 [INFO][4866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89203aa1-f58e-43c1-aeac-389c8c4e354d", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"", Pod:"coredns-674b8bbfcf-sr8nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"caliad298dc0ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:49.724549 containerd[1504]: 2026-04-21 10:10:49.691 [INFO][4866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.8/32] ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.724549 containerd[1504]: 2026-04-21 10:10:49.691 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad298dc0ef8 ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.724549 containerd[1504]: 2026-04-21 10:10:49.697 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.724549 containerd[1504]: 2026-04-21 10:10:49.697 [INFO][4866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" 
WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89203aa1-f58e-43c1-aeac-389c8c4e354d", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d", Pod:"coredns-674b8bbfcf-sr8nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad298dc0ef8", MAC:"f6:17:d6:30:8f:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:10:49.724804 
containerd[1504]: 2026-04-21 10:10:49.713 [INFO][4866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d" Namespace="kube-system" Pod="coredns-674b8bbfcf-sr8nw" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:10:49.767961 containerd[1504]: time="2026-04-21T10:10:49.766801128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:10:49.767961 containerd[1504]: time="2026-04-21T10:10:49.766849721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:10:49.767961 containerd[1504]: time="2026-04-21T10:10:49.766868749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:49.767961 containerd[1504]: time="2026-04-21T10:10:49.766951343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:10:49.799716 systemd[1]: Started cri-containerd-8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d.scope - libcontainer container 8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d. 
Apr 21 10:10:49.831591 kubelet[2567]: I0421 10:10:49.829550 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5c9bd8455-nbw87" podStartSLOduration=17.826663792 podStartE2EDuration="25.829529613s" podCreationTimestamp="2026-04-21 10:10:24 +0000 UTC" firstStartedPulling="2026-04-21 10:10:40.682293881 +0000 UTC m=+30.291803955" lastFinishedPulling="2026-04-21 10:10:48.685159702 +0000 UTC m=+38.294669776" observedRunningTime="2026-04-21 10:10:49.784848364 +0000 UTC m=+39.394358438" watchObservedRunningTime="2026-04-21 10:10:49.829529613 +0000 UTC m=+39.439039687" Apr 21 10:10:49.831591 kubelet[2567]: I0421 10:10:49.829662 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5c9bd8455-plt4c" podStartSLOduration=17.340933619 podStartE2EDuration="25.829657755s" podCreationTimestamp="2026-04-21 10:10:24 +0000 UTC" firstStartedPulling="2026-04-21 10:10:40.700776126 +0000 UTC m=+30.310286200" lastFinishedPulling="2026-04-21 10:10:49.189500221 +0000 UTC m=+38.799010336" observedRunningTime="2026-04-21 10:10:49.829281472 +0000 UTC m=+39.438791536" watchObservedRunningTime="2026-04-21 10:10:49.829657755 +0000 UTC m=+39.439167829" Apr 21 10:10:49.857383 containerd[1504]: time="2026-04-21T10:10:49.857342383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sr8nw,Uid:89203aa1-f58e-43c1-aeac-389c8c4e354d,Namespace:kube-system,Attempt:1,} returns sandbox id \"8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d\"" Apr 21 10:10:49.864738 kubelet[2567]: I0421 10:10:49.863140 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:49.872586 containerd[1504]: time="2026-04-21T10:10:49.871460521Z" level=info msg="CreateContainer within sandbox \"8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:10:49.894478 
containerd[1504]: time="2026-04-21T10:10:49.894431015Z" level=info msg="CreateContainer within sandbox \"8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18d2f017e4de321e83cc97293b2b74bbb6f50244bf9cffcae78b56dfbf1a75e3\"" Apr 21 10:10:49.896744 containerd[1504]: time="2026-04-21T10:10:49.896711072Z" level=info msg="StartContainer for \"18d2f017e4de321e83cc97293b2b74bbb6f50244bf9cffcae78b56dfbf1a75e3\"" Apr 21 10:10:49.942087 systemd[1]: Started cri-containerd-18d2f017e4de321e83cc97293b2b74bbb6f50244bf9cffcae78b56dfbf1a75e3.scope - libcontainer container 18d2f017e4de321e83cc97293b2b74bbb6f50244bf9cffcae78b56dfbf1a75e3. Apr 21 10:10:49.978413 containerd[1504]: time="2026-04-21T10:10:49.977217690Z" level=info msg="StartContainer for \"18d2f017e4de321e83cc97293b2b74bbb6f50244bf9cffcae78b56dfbf1a75e3\" returns successfully" Apr 21 10:10:50.761657 kubelet[2567]: I0421 10:10:50.761598 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:10:51.028861 containerd[1504]: time="2026-04-21T10:10:51.028746914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:51.030683 containerd[1504]: time="2026-04-21T10:10:51.030650487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:10:51.031719 containerd[1504]: time="2026-04-21T10:10:51.031696335Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:51.033709 containerd[1504]: time="2026-04-21T10:10:51.033675402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Apr 21 10:10:51.034188 containerd[1504]: time="2026-04-21T10:10:51.034079487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.841776716s" Apr 21 10:10:51.034188 containerd[1504]: time="2026-04-21T10:10:51.034101230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:10:51.035470 containerd[1504]: time="2026-04-21T10:10:51.035449911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:10:51.037827 containerd[1504]: time="2026-04-21T10:10:51.037795657Z" level=info msg="CreateContainer within sandbox \"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:10:51.059910 containerd[1504]: time="2026-04-21T10:10:51.059870715Z" level=info msg="CreateContainer within sandbox \"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4f52635d50ab0a0c0adcd756974dce53f417a5e2fc2b65ad499b6c56ce27e5a1\"" Apr 21 10:10:51.060512 containerd[1504]: time="2026-04-21T10:10:51.060492877Z" level=info msg="StartContainer for \"4f52635d50ab0a0c0adcd756974dce53f417a5e2fc2b65ad499b6c56ce27e5a1\"" Apr 21 10:10:51.078976 systemd-networkd[1416]: caliad298dc0ef8: Gained IPv6LL Apr 21 10:10:51.111698 systemd[1]: Started cri-containerd-4f52635d50ab0a0c0adcd756974dce53f417a5e2fc2b65ad499b6c56ce27e5a1.scope - libcontainer container 4f52635d50ab0a0c0adcd756974dce53f417a5e2fc2b65ad499b6c56ce27e5a1. 
Apr 21 10:10:51.139481 containerd[1504]: time="2026-04-21T10:10:51.139326509Z" level=info msg="StartContainer for \"4f52635d50ab0a0c0adcd756974dce53f417a5e2fc2b65ad499b6c56ce27e5a1\" returns successfully" Apr 21 10:10:51.793433 kubelet[2567]: I0421 10:10:51.793304 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sr8nw" podStartSLOduration=36.793279712 podStartE2EDuration="36.793279712s" podCreationTimestamp="2026-04-21 10:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:10:50.776999095 +0000 UTC m=+40.386509199" watchObservedRunningTime="2026-04-21 10:10:51.793279712 +0000 UTC m=+41.402789826" Apr 21 10:10:53.343781 containerd[1504]: time="2026-04-21T10:10:53.343728896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:53.345045 containerd[1504]: time="2026-04-21T10:10:53.344956866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:10:53.346177 containerd[1504]: time="2026-04-21T10:10:53.346109845Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:53.348273 containerd[1504]: time="2026-04-21T10:10:53.348226598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:53.348712 containerd[1504]: time="2026-04-21T10:10:53.348690343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", 
repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.313216556s" Apr 21 10:10:53.348749 containerd[1504]: time="2026-04-21T10:10:53.348715701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:10:53.351584 containerd[1504]: time="2026-04-21T10:10:53.349951083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:10:53.365724 containerd[1504]: time="2026-04-21T10:10:53.365690729Z" level=info msg="CreateContainer within sandbox \"63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:10:53.378528 containerd[1504]: time="2026-04-21T10:10:53.378471348Z" level=info msg="CreateContainer within sandbox \"63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"43a35eaba390a98d248b39a3bb0ef077000eb018fef0a0af4ccad17d6467d72e\"" Apr 21 10:10:53.379869 containerd[1504]: time="2026-04-21T10:10:53.379118428Z" level=info msg="StartContainer for \"43a35eaba390a98d248b39a3bb0ef077000eb018fef0a0af4ccad17d6467d72e\"" Apr 21 10:10:53.403686 systemd[1]: Started cri-containerd-43a35eaba390a98d248b39a3bb0ef077000eb018fef0a0af4ccad17d6467d72e.scope - libcontainer container 43a35eaba390a98d248b39a3bb0ef077000eb018fef0a0af4ccad17d6467d72e. 
Apr 21 10:10:53.442874 containerd[1504]: time="2026-04-21T10:10:53.442838111Z" level=info msg="StartContainer for \"43a35eaba390a98d248b39a3bb0ef077000eb018fef0a0af4ccad17d6467d72e\" returns successfully" Apr 21 10:10:53.787993 kubelet[2567]: I0421 10:10:53.787638 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9cb6dbfd9-g7plt" podStartSLOduration=16.385754158 podStartE2EDuration="28.787614759s" podCreationTimestamp="2026-04-21 10:10:25 +0000 UTC" firstStartedPulling="2026-04-21 10:10:40.94742908 +0000 UTC m=+30.556939154" lastFinishedPulling="2026-04-21 10:10:53.349289691 +0000 UTC m=+42.958799755" observedRunningTime="2026-04-21 10:10:53.787427269 +0000 UTC m=+43.396937383" watchObservedRunningTime="2026-04-21 10:10:53.787614759 +0000 UTC m=+43.397124873" Apr 21 10:10:55.285739 containerd[1504]: time="2026-04-21T10:10:55.285681936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:55.287913 containerd[1504]: time="2026-04-21T10:10:55.287887804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:10:55.289471 containerd[1504]: time="2026-04-21T10:10:55.289311589Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:55.295592 containerd[1504]: time="2026-04-21T10:10:55.293474640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:55.295592 containerd[1504]: time="2026-04-21T10:10:55.294048000Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.94407922s" Apr 21 10:10:55.295592 containerd[1504]: time="2026-04-21T10:10:55.294067730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:10:55.296726 containerd[1504]: time="2026-04-21T10:10:55.296711032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:10:55.299105 containerd[1504]: time="2026-04-21T10:10:55.299087876Z" level=info msg="CreateContainer within sandbox \"6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:10:55.311775 containerd[1504]: time="2026-04-21T10:10:55.311750061Z" level=info msg="CreateContainer within sandbox \"6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b41767f7a6f8c03becd48ebe0bda8e3b7adcd04cacd3f52904ea458a9b1eb477\"" Apr 21 10:10:55.312969 containerd[1504]: time="2026-04-21T10:10:55.312546956Z" level=info msg="StartContainer for \"b41767f7a6f8c03becd48ebe0bda8e3b7adcd04cacd3f52904ea458a9b1eb477\"" Apr 21 10:10:55.357275 systemd[1]: Started cri-containerd-b41767f7a6f8c03becd48ebe0bda8e3b7adcd04cacd3f52904ea458a9b1eb477.scope - libcontainer container b41767f7a6f8c03becd48ebe0bda8e3b7adcd04cacd3f52904ea458a9b1eb477. 
Apr 21 10:10:55.413213 containerd[1504]: time="2026-04-21T10:10:55.413181018Z" level=info msg="StartContainer for \"b41767f7a6f8c03becd48ebe0bda8e3b7adcd04cacd3f52904ea458a9b1eb477\" returns successfully" Apr 21 10:10:57.063239 containerd[1504]: time="2026-04-21T10:10:57.063190608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:57.064447 containerd[1504]: time="2026-04-21T10:10:57.064364077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:10:57.065390 containerd[1504]: time="2026-04-21T10:10:57.065355024Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:57.069140 containerd[1504]: time="2026-04-21T10:10:57.067364436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:57.069140 containerd[1504]: time="2026-04-21T10:10:57.068773991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.771953715s" Apr 21 10:10:57.069140 containerd[1504]: time="2026-04-21T10:10:57.068808633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:10:57.072630 containerd[1504]: 
time="2026-04-21T10:10:57.072614669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:10:57.074747 containerd[1504]: time="2026-04-21T10:10:57.074719296Z" level=info msg="CreateContainer within sandbox \"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:10:57.094232 containerd[1504]: time="2026-04-21T10:10:57.094194949Z" level=info msg="CreateContainer within sandbox \"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"81fbe0131b9fc4347ac505066bf04ff38cd35608b06cfca7a196d49316e4ad13\"" Apr 21 10:10:57.094924 containerd[1504]: time="2026-04-21T10:10:57.094845565Z" level=info msg="StartContainer for \"81fbe0131b9fc4347ac505066bf04ff38cd35608b06cfca7a196d49316e4ad13\"" Apr 21 10:10:57.124696 systemd[1]: Started cri-containerd-81fbe0131b9fc4347ac505066bf04ff38cd35608b06cfca7a196d49316e4ad13.scope - libcontainer container 81fbe0131b9fc4347ac505066bf04ff38cd35608b06cfca7a196d49316e4ad13. 
Apr 21 10:10:57.151537 containerd[1504]: time="2026-04-21T10:10:57.151494730Z" level=info msg="StartContainer for \"81fbe0131b9fc4347ac505066bf04ff38cd35608b06cfca7a196d49316e4ad13\" returns successfully" Apr 21 10:10:57.553611 kubelet[2567]: I0421 10:10:57.553455 2567 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:10:57.555276 kubelet[2567]: I0421 10:10:57.555243 2567 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:10:57.797282 kubelet[2567]: I0421 10:10:57.796904 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7f724" podStartSLOduration=16.653632326 podStartE2EDuration="32.796889974s" podCreationTimestamp="2026-04-21 10:10:25 +0000 UTC" firstStartedPulling="2026-04-21 10:10:40.928557652 +0000 UTC m=+30.538067716" lastFinishedPulling="2026-04-21 10:10:57.0718153 +0000 UTC m=+46.681325364" observedRunningTime="2026-04-21 10:10:57.795850545 +0000 UTC m=+47.405360609" watchObservedRunningTime="2026-04-21 10:10:57.796889974 +0000 UTC m=+47.406400048" Apr 21 10:10:58.998432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073913310.mount: Deactivated successfully. 
Apr 21 10:10:59.017705 containerd[1504]: time="2026-04-21T10:10:59.017649814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:59.018908 containerd[1504]: time="2026-04-21T10:10:59.018797034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:10:59.020538 containerd[1504]: time="2026-04-21T10:10:59.019620188Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:59.022538 containerd[1504]: time="2026-04-21T10:10:59.021614389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:10:59.022538 containerd[1504]: time="2026-04-21T10:10:59.022126106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.949420621s" Apr 21 10:10:59.022538 containerd[1504]: time="2026-04-21T10:10:59.022146888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:10:59.025257 containerd[1504]: time="2026-04-21T10:10:59.025217052Z" level=info msg="CreateContainer within sandbox \"6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:10:59.048601 
containerd[1504]: time="2026-04-21T10:10:59.048548572Z" level=info msg="CreateContainer within sandbox \"6ce2524073f8117d6117b81a11195d47cf7834d8ae5afb4988ffef0c0a299d63\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6d82b2e03bac0540f2245e42ffe180dd9fccf74eb2468a0f10e7e7bd007d710a\"" Apr 21 10:10:59.049210 containerd[1504]: time="2026-04-21T10:10:59.049192968Z" level=info msg="StartContainer for \"6d82b2e03bac0540f2245e42ffe180dd9fccf74eb2468a0f10e7e7bd007d710a\"" Apr 21 10:10:59.076704 systemd[1]: Started cri-containerd-6d82b2e03bac0540f2245e42ffe180dd9fccf74eb2468a0f10e7e7bd007d710a.scope - libcontainer container 6d82b2e03bac0540f2245e42ffe180dd9fccf74eb2468a0f10e7e7bd007d710a. Apr 21 10:10:59.116769 containerd[1504]: time="2026-04-21T10:10:59.116653126Z" level=info msg="StartContainer for \"6d82b2e03bac0540f2245e42ffe180dd9fccf74eb2468a0f10e7e7bd007d710a\" returns successfully" Apr 21 10:10:59.814257 kubelet[2567]: I0421 10:10:59.814017 2567 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c7988f64c-zv8dm" podStartSLOduration=2.272677414 podStartE2EDuration="19.813995496s" podCreationTimestamp="2026-04-21 10:10:40 +0000 UTC" firstStartedPulling="2026-04-21 10:10:41.481542136 +0000 UTC m=+31.091052210" lastFinishedPulling="2026-04-21 10:10:59.022860228 +0000 UTC m=+48.632370292" observedRunningTime="2026-04-21 10:10:59.809748597 +0000 UTC m=+49.419258691" watchObservedRunningTime="2026-04-21 10:10:59.813995496 +0000 UTC m=+49.423505590" Apr 21 10:11:01.150490 kubelet[2567]: I0421 10:11:01.149887 2567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:11:10.480060 containerd[1504]: time="2026-04-21T10:11:10.479352342Z" level=info msg="StopPodSandbox for \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\"" Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.542 [WARNING][5339] cni-plugin/k8s.go 616: CNI_CONTAINERID does not 
match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0", GenerateName:"calico-kube-controllers-9cb6dbfd9-", Namespace:"calico-system", SelfLink:"", UID:"b3626028-18b5-47bc-859d-14b0862e581b", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb6dbfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6", Pod:"calico-kube-controllers-9cb6dbfd9-g7plt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf1654f9d40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.542 [INFO][5339] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.577334 containerd[1504]: 
2026-04-21 10:11:10.542 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" iface="eth0" netns="" Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.542 [INFO][5339] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.542 [INFO][5339] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.565 [INFO][5346] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.566 [INFO][5346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.566 [INFO][5346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.571 [WARNING][5346] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.571 [INFO][5346] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.573 [INFO][5346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:10.577334 containerd[1504]: 2026-04-21 10:11:10.575 [INFO][5339] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.577759 containerd[1504]: time="2026-04-21T10:11:10.577389983Z" level=info msg="TearDown network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\" successfully" Apr 21 10:11:10.577759 containerd[1504]: time="2026-04-21T10:11:10.577418856Z" level=info msg="StopPodSandbox for \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\" returns successfully" Apr 21 10:11:10.578314 containerd[1504]: time="2026-04-21T10:11:10.578288682Z" level=info msg="RemovePodSandbox for \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\"" Apr 21 10:11:10.578376 containerd[1504]: time="2026-04-21T10:11:10.578316613Z" level=info msg="Forcibly stopping sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\"" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.611 [WARNING][5361] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0", GenerateName:"calico-kube-controllers-9cb6dbfd9-", Namespace:"calico-system", SelfLink:"", UID:"b3626028-18b5-47bc-859d-14b0862e581b", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9cb6dbfd9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"63e5c4e4d602f3a1295685caa1d5b5e6bae093876559d30a61154d357776c2a6", Pod:"calico-kube-controllers-9cb6dbfd9-g7plt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf1654f9d40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.611 [INFO][5361] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.611 [INFO][5361] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" iface="eth0" netns="" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.611 [INFO][5361] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.611 [INFO][5361] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.631 [INFO][5369] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.631 [INFO][5369] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.631 [INFO][5369] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.638 [WARNING][5369] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.638 [INFO][5369] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" HandleID="k8s-pod-network.b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--kube--controllers--9cb6dbfd9--g7plt-eth0" Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.640 [INFO][5369] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:10.645142 containerd[1504]: 2026-04-21 10:11:10.642 [INFO][5361] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a" Apr 21 10:11:10.645654 containerd[1504]: time="2026-04-21T10:11:10.645191503Z" level=info msg="TearDown network for sandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\" successfully" Apr 21 10:11:10.651601 containerd[1504]: time="2026-04-21T10:11:10.651422531Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:11:10.651601 containerd[1504]: time="2026-04-21T10:11:10.651515661Z" level=info msg="RemovePodSandbox \"b8a1dea9c173199045a1de5a37cda8e6f310189b98f6392835dea4050274fd6a\" returns successfully" Apr 21 10:11:10.652070 containerd[1504]: time="2026-04-21T10:11:10.652043452Z" level=info msg="StopPodSandbox for \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\"" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.681 [WARNING][5383] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"a753325e-f90f-4e6b-a9d0-a84ed1028eab", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe", Pod:"calico-apiserver-5c9bd8455-plt4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali849863cd68f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.681 [INFO][5383] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.681 [INFO][5383] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" iface="eth0" netns="" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.681 [INFO][5383] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.681 [INFO][5383] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.699 [INFO][5390] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.699 [INFO][5390] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.700 [INFO][5390] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.705 [WARNING][5390] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.705 [INFO][5390] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.706 [INFO][5390] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:10.710840 containerd[1504]: 2026-04-21 10:11:10.708 [INFO][5383] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.711239 containerd[1504]: time="2026-04-21T10:11:10.710871641Z" level=info msg="TearDown network for sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\" successfully" Apr 21 10:11:10.711239 containerd[1504]: time="2026-04-21T10:11:10.710894285Z" level=info msg="StopPodSandbox for \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\" returns successfully" Apr 21 10:11:10.711324 containerd[1504]: time="2026-04-21T10:11:10.711300794Z" level=info msg="RemovePodSandbox for \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\"" Apr 21 10:11:10.711351 containerd[1504]: time="2026-04-21T10:11:10.711326993Z" level=info msg="Forcibly stopping sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\"" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.738 [WARNING][5404] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"a753325e-f90f-4e6b-a9d0-a84ed1028eab", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"f4c6da568a1610a689ef6472bfbfbe5742d77689c3791c668afe2816374b00fe", Pod:"calico-apiserver-5c9bd8455-plt4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali849863cd68f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.738 [INFO][5404] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.738 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" iface="eth0" netns="" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.738 [INFO][5404] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.738 [INFO][5404] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.757 [INFO][5411] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.757 [INFO][5411] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.758 [INFO][5411] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.765 [WARNING][5411] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.765 [INFO][5411] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" HandleID="k8s-pod-network.93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--plt4c-eth0" Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.766 [INFO][5411] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:10.772514 containerd[1504]: 2026-04-21 10:11:10.769 [INFO][5404] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16" Apr 21 10:11:10.772514 containerd[1504]: time="2026-04-21T10:11:10.771840189Z" level=info msg="TearDown network for sandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\" successfully" Apr 21 10:11:10.776367 containerd[1504]: time="2026-04-21T10:11:10.776334782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:11:10.776455 containerd[1504]: time="2026-04-21T10:11:10.776389794Z" level=info msg="RemovePodSandbox \"93b149cdc60b2a8004aff36b53193b197d8f29d2111d190969de847a46bd1a16\" returns successfully" Apr 21 10:11:10.776880 containerd[1504]: time="2026-04-21T10:11:10.776837867Z" level=info msg="StopPodSandbox for \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\"" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.806 [WARNING][5425] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89203aa1-f58e-43c1-aeac-389c8c4e354d", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d", Pod:"coredns-674b8bbfcf-sr8nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad298dc0ef8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.807 [INFO][5425] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.807 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" iface="eth0" netns="" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.807 [INFO][5425] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.807 [INFO][5425] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.825 [INFO][5432] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.825 [INFO][5432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.825 [INFO][5432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.831 [WARNING][5432] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.831 [INFO][5432] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.832 [INFO][5432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:10.836985 containerd[1504]: 2026-04-21 10:11:10.834 [INFO][5425] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.836985 containerd[1504]: time="2026-04-21T10:11:10.836866014Z" level=info msg="TearDown network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\" successfully" Apr 21 10:11:10.836985 containerd[1504]: time="2026-04-21T10:11:10.836888808Z" level=info msg="StopPodSandbox for \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\" returns successfully" Apr 21 10:11:10.837925 containerd[1504]: time="2026-04-21T10:11:10.837447196Z" level=info msg="RemovePodSandbox for \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\"" Apr 21 10:11:10.837925 containerd[1504]: time="2026-04-21T10:11:10.837467106Z" level=info msg="Forcibly stopping sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\"" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.866 [WARNING][5447] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"89203aa1-f58e-43c1-aeac-389c8c4e354d", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"8bc7936549543b17997d29bca814ab8bda6a201395ed3b0d77270035ba1f670d", Pod:"coredns-674b8bbfcf-sr8nw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliad298dc0ef8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:10.896896 containerd[1504]: 
2026-04-21 10:11:10.867 [INFO][5447] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.867 [INFO][5447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" iface="eth0" netns="" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.867 [INFO][5447] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.867 [INFO][5447] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.884 [INFO][5456] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.884 [INFO][5456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.884 [INFO][5456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.890 [WARNING][5456] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.890 [INFO][5456] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" HandleID="k8s-pod-network.e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--sr8nw-eth0" Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.892 [INFO][5456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:10.896896 containerd[1504]: 2026-04-21 10:11:10.894 [INFO][5447] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6" Apr 21 10:11:10.897267 containerd[1504]: time="2026-04-21T10:11:10.897062766Z" level=info msg="TearDown network for sandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\" successfully" Apr 21 10:11:10.902508 containerd[1504]: time="2026-04-21T10:11:10.902482275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:11:10.902559 containerd[1504]: time="2026-04-21T10:11:10.902533693Z" level=info msg="RemovePodSandbox \"e96746d65d7d1eb72569943bf2441ffae26c066ef0db63ad961868436fc2b4a6\" returns successfully" Apr 21 10:11:10.902972 containerd[1504]: time="2026-04-21T10:11:10.902949807Z" level=info msg="StopPodSandbox for \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\"" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.929 [WARNING][5472] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"93fb5265-098d-445f-9cde-fcef06ca57d0", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526", Pod:"goldmane-5b85766d88-8gxsz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali673ceeec748", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.929 [INFO][5472] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.929 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" iface="eth0" netns="" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.929 [INFO][5472] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.929 [INFO][5472] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.946 [INFO][5479] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.947 [INFO][5479] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.947 [INFO][5479] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.951 [WARNING][5479] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.951 [INFO][5479] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.953 [INFO][5479] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:10.957313 containerd[1504]: 2026-04-21 10:11:10.955 [INFO][5472] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:10.957661 containerd[1504]: time="2026-04-21T10:11:10.957443963Z" level=info msg="TearDown network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\" successfully" Apr 21 10:11:10.957661 containerd[1504]: time="2026-04-21T10:11:10.957468219Z" level=info msg="StopPodSandbox for \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\" returns successfully" Apr 21 10:11:10.958094 containerd[1504]: time="2026-04-21T10:11:10.958064313Z" level=info msg="RemovePodSandbox for \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\"" Apr 21 10:11:10.958094 containerd[1504]: time="2026-04-21T10:11:10.958089310Z" level=info msg="Forcibly stopping sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\"" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:10.985 [WARNING][5493] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"93fb5265-098d-445f-9cde-fcef06ca57d0", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"90771e0f043abf1ae3e442e7c013ff5298ec2a9e9d29e667567beb12ea810526", Pod:"goldmane-5b85766d88-8gxsz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali673ceeec748", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:10.985 [INFO][5493] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:10.985 [INFO][5493] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" iface="eth0" netns="" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:10.985 [INFO][5493] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:10.985 [INFO][5493] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:11.004 [INFO][5501] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:11.005 [INFO][5501] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:11.005 [INFO][5501] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:11.010 [WARNING][5501] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:11.010 [INFO][5501] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" HandleID="k8s-pod-network.c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Workload="ci--4081--3--7--a--afac96dda8-k8s-goldmane--5b85766d88--8gxsz-eth0" Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:11.011 [INFO][5501] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.015462 containerd[1504]: 2026-04-21 10:11:11.013 [INFO][5493] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d" Apr 21 10:11:11.015839 containerd[1504]: time="2026-04-21T10:11:11.015507944Z" level=info msg="TearDown network for sandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\" successfully" Apr 21 10:11:11.019543 containerd[1504]: time="2026-04-21T10:11:11.019490760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:11:11.019692 containerd[1504]: time="2026-04-21T10:11:11.019561366Z" level=info msg="RemovePodSandbox \"c87bd396d1a8e3e0e307b719502bbbaff7fda9348ac302ea195916c29842153d\" returns successfully" Apr 21 10:11:11.020152 containerd[1504]: time="2026-04-21T10:11:11.020125862Z" level=info msg="StopPodSandbox for \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\"" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.048 [WARNING][5515] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.048 [INFO][5515] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.048 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" iface="eth0" netns="" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.048 [INFO][5515] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.048 [INFO][5515] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.064 [INFO][5522] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.065 [INFO][5522] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.065 [INFO][5522] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.070 [WARNING][5522] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.070 [INFO][5522] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.072 [INFO][5522] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.076691 containerd[1504]: 2026-04-21 10:11:11.074 [INFO][5515] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.076691 containerd[1504]: time="2026-04-21T10:11:11.076476445Z" level=info msg="TearDown network for sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\" successfully" Apr 21 10:11:11.076691 containerd[1504]: time="2026-04-21T10:11:11.076494943Z" level=info msg="StopPodSandbox for \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\" returns successfully" Apr 21 10:11:11.077049 containerd[1504]: time="2026-04-21T10:11:11.076870005Z" level=info msg="RemovePodSandbox for \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\"" Apr 21 10:11:11.077049 containerd[1504]: time="2026-04-21T10:11:11.076897617Z" level=info msg="Forcibly stopping sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\"" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.114 [WARNING][5536] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" WorkloadEndpoint="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.114 [INFO][5536] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.114 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" iface="eth0" netns="" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.114 [INFO][5536] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.114 [INFO][5536] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.131 [INFO][5543] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.131 [INFO][5543] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.131 [INFO][5543] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.136 [WARNING][5543] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.136 [INFO][5543] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" HandleID="k8s-pod-network.fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Workload="ci--4081--3--7--a--afac96dda8-k8s-whisker--969547dbb--pw986-eth0" Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.137 [INFO][5543] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.141392 containerd[1504]: 2026-04-21 10:11:11.139 [INFO][5536] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af" Apr 21 10:11:11.141724 containerd[1504]: time="2026-04-21T10:11:11.141429213Z" level=info msg="TearDown network for sandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\" successfully" Apr 21 10:11:11.144791 containerd[1504]: time="2026-04-21T10:11:11.144757737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:11:11.144875 containerd[1504]: time="2026-04-21T10:11:11.144821963Z" level=info msg="RemovePodSandbox \"fe790e0dbeaf498127610b0931bad81ba0e548ad1392df9603afec8306ac67af\" returns successfully" Apr 21 10:11:11.145323 containerd[1504]: time="2026-04-21T10:11:11.145283165Z" level=info msg="StopPodSandbox for \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\"" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.176 [WARNING][5557] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce1fbcf9-9c29-4f48-8a24-9477c46cb787", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53", Pod:"coredns-674b8bbfcf-d9sdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali090703fc46b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.177 [INFO][5557] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.177 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" iface="eth0" netns="" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.177 [INFO][5557] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.177 [INFO][5557] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.196 [INFO][5565] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.196 [INFO][5565] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.196 [INFO][5565] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.201 [WARNING][5565] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.202 [INFO][5565] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.203 [INFO][5565] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.208363 containerd[1504]: 2026-04-21 10:11:11.206 [INFO][5557] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.208840 containerd[1504]: time="2026-04-21T10:11:11.208413099Z" level=info msg="TearDown network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\" successfully" Apr 21 10:11:11.208840 containerd[1504]: time="2026-04-21T10:11:11.208450555Z" level=info msg="StopPodSandbox for \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\" returns successfully" Apr 21 10:11:11.208914 containerd[1504]: time="2026-04-21T10:11:11.208886069Z" level=info msg="RemovePodSandbox for \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\"" Apr 21 10:11:11.208935 containerd[1504]: time="2026-04-21T10:11:11.208919088Z" level=info msg="Forcibly stopping sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\"" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.236 [WARNING][5579] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ce1fbcf9-9c29-4f48-8a24-9477c46cb787", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"c0ae502b4e6212dac0a5d5feb866cb6e1deb83a55188683ddac5ed16bb511f53", Pod:"coredns-674b8bbfcf-d9sdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali090703fc46b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 
10:11:11.236 [INFO][5579] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.236 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" iface="eth0" netns="" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.236 [INFO][5579] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.236 [INFO][5579] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.255 [INFO][5586] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.255 [INFO][5586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.255 [INFO][5586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.260 [WARNING][5586] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.260 [INFO][5586] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" HandleID="k8s-pod-network.6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Workload="ci--4081--3--7--a--afac96dda8-k8s-coredns--674b8bbfcf--d9sdn-eth0" Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.261 [INFO][5586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.265303 containerd[1504]: 2026-04-21 10:11:11.263 [INFO][5579] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c" Apr 21 10:11:11.265835 containerd[1504]: time="2026-04-21T10:11:11.265333686Z" level=info msg="TearDown network for sandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\" successfully" Apr 21 10:11:11.269698 containerd[1504]: time="2026-04-21T10:11:11.269619116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:11:11.269698 containerd[1504]: time="2026-04-21T10:11:11.269699446Z" level=info msg="RemovePodSandbox \"6f3f59ad5bf2ab55b942ffd0384475e7e86c0b6921a5bd2c0c6c6dba576a218c\" returns successfully" Apr 21 10:11:11.270381 containerd[1504]: time="2026-04-21T10:11:11.270144274Z" level=info msg="StopPodSandbox for \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\"" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.298 [WARNING][5600] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5cda479-2770-45d5-b204-a8ebaf013eb6", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9", Pod:"csi-node-driver-7f724", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b12bb0e5da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.298 [INFO][5600] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.298 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" iface="eth0" netns="" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.298 [INFO][5600] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.298 [INFO][5600] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.314 [INFO][5607] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.314 [INFO][5607] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.314 [INFO][5607] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.320 [WARNING][5607] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.320 [INFO][5607] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.321 [INFO][5607] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.325545 containerd[1504]: 2026-04-21 10:11:11.323 [INFO][5600] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.326207 containerd[1504]: time="2026-04-21T10:11:11.325595898Z" level=info msg="TearDown network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\" successfully" Apr 21 10:11:11.326207 containerd[1504]: time="2026-04-21T10:11:11.325620785Z" level=info msg="StopPodSandbox for \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\" returns successfully" Apr 21 10:11:11.326207 containerd[1504]: time="2026-04-21T10:11:11.326066132Z" level=info msg="RemovePodSandbox for \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\"" Apr 21 10:11:11.326207 containerd[1504]: time="2026-04-21T10:11:11.326085922Z" level=info msg="Forcibly stopping sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\"" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.353 [WARNING][5621] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5cda479-2770-45d5-b204-a8ebaf013eb6", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"e94beeb1f72cee23ab125dbf19265c0edb161cf79a58b3053421daedbb797eb9", Pod:"csi-node-driver-7f724", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9b12bb0e5da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.353 [INFO][5621] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.353 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" iface="eth0" netns="" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.353 [INFO][5621] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.353 [INFO][5621] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.371 [INFO][5629] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.371 [INFO][5629] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.371 [INFO][5629] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.376 [WARNING][5629] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.376 [INFO][5629] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" HandleID="k8s-pod-network.6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Workload="ci--4081--3--7--a--afac96dda8-k8s-csi--node--driver--7f724-eth0" Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.378 [INFO][5629] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.384393 containerd[1504]: 2026-04-21 10:11:11.380 [INFO][5621] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530" Apr 21 10:11:11.384393 containerd[1504]: time="2026-04-21T10:11:11.382668794Z" level=info msg="TearDown network for sandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\" successfully" Apr 21 10:11:11.387482 containerd[1504]: time="2026-04-21T10:11:11.387447903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:11:11.388260 containerd[1504]: time="2026-04-21T10:11:11.387498950Z" level=info msg="RemovePodSandbox \"6a6d80dafe6eeadc528c2ef09feef7b157fe07bbb7a4056fb99371260cfcd530\" returns successfully" Apr 21 10:11:11.388260 containerd[1504]: time="2026-04-21T10:11:11.388008725Z" level=info msg="StopPodSandbox for \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\"" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.417 [WARNING][5643] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"4c0203ee-d1e8-4697-85ba-00626b1ad292", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea", Pod:"calico-apiserver-5c9bd8455-nbw87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliecedd847344", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.417 [INFO][5643] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.417 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" iface="eth0" netns="" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.418 [INFO][5643] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.418 [INFO][5643] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.437 [INFO][5650] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.437 [INFO][5650] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.437 [INFO][5650] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.442 [WARNING][5650] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.442 [INFO][5650] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.444 [INFO][5650] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.448115 containerd[1504]: 2026-04-21 10:11:11.445 [INFO][5643] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.448480 containerd[1504]: time="2026-04-21T10:11:11.448172619Z" level=info msg="TearDown network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\" successfully" Apr 21 10:11:11.448480 containerd[1504]: time="2026-04-21T10:11:11.448194601Z" level=info msg="StopPodSandbox for \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\" returns successfully" Apr 21 10:11:11.449045 containerd[1504]: time="2026-04-21T10:11:11.448759008Z" level=info msg="RemovePodSandbox for \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\"" Apr 21 10:11:11.449045 containerd[1504]: time="2026-04-21T10:11:11.448792809Z" level=info msg="Forcibly stopping sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\"" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.477 [WARNING][5665] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0", GenerateName:"calico-apiserver-5c9bd8455-", Namespace:"calico-system", SelfLink:"", UID:"4c0203ee-d1e8-4697-85ba-00626b1ad292", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 10, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9bd8455", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-a-afac96dda8", ContainerID:"f77e25d1a5a36c47fedf919f66f3076517cffa81ba3835aa7e1afef5152a29ea", Pod:"calico-apiserver-5c9bd8455-nbw87", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliecedd847344", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.478 [INFO][5665] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.478 [INFO][5665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" iface="eth0" netns="" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.478 [INFO][5665] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.478 [INFO][5665] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.496 [INFO][5673] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.496 [INFO][5673] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.496 [INFO][5673] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.502 [WARNING][5673] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.502 [INFO][5673] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" HandleID="k8s-pod-network.ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Workload="ci--4081--3--7--a--afac96dda8-k8s-calico--apiserver--5c9bd8455--nbw87-eth0" Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.503 [INFO][5673] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:11:11.507931 containerd[1504]: 2026-04-21 10:11:11.505 [INFO][5665] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7" Apr 21 10:11:11.507931 containerd[1504]: time="2026-04-21T10:11:11.507853087Z" level=info msg="TearDown network for sandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\" successfully" Apr 21 10:11:11.512284 containerd[1504]: time="2026-04-21T10:11:11.512242813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 21 10:11:11.512336 containerd[1504]: time="2026-04-21T10:11:11.512316734Z" level=info msg="RemovePodSandbox \"ae679cf5b1159ae3c5c037a67b57a5084baabb61260dac9230f387f380835ea7\" returns successfully" Apr 21 10:11:20.120436 systemd[1]: run-containerd-runc-k8s.io-e97d8e08b061d36ac26c0a8a4ff9047514af1be43302e9530d9c81cbbf33c58d-runc.P7vLo5.mount: Deactivated successfully. 
Apr 21 10:11:24.967870 systemd[1]: Started sshd@8-46.62.167.141:22-50.85.169.122:48406.service - OpenSSH per-connection server daemon (50.85.169.122:48406). Apr 21 10:11:25.192058 sshd[5786]: Accepted publickey for core from 50.85.169.122 port 48406 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:25.193605 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:25.198964 systemd-logind[1477]: New session 8 of user core. Apr 21 10:11:25.203677 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 10:11:25.441528 sshd[5786]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:25.446773 systemd[1]: sshd@8-46.62.167.141:22-50.85.169.122:48406.service: Deactivated successfully. Apr 21 10:11:25.449433 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:11:25.450666 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:11:25.451439 systemd-logind[1477]: Removed session 8. Apr 21 10:11:26.108946 systemd[1]: run-containerd-runc-k8s.io-43a35eaba390a98d248b39a3bb0ef077000eb018fef0a0af4ccad17d6467d72e-runc.V5PNZa.mount: Deactivated successfully. Apr 21 10:11:30.489021 systemd[1]: Started sshd@9-46.62.167.141:22-50.85.169.122:44754.service - OpenSSH per-connection server daemon (50.85.169.122:44754). Apr 21 10:11:30.705466 sshd[5827]: Accepted publickey for core from 50.85.169.122 port 44754 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:30.707337 sshd[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:30.712262 systemd-logind[1477]: New session 9 of user core. Apr 21 10:11:30.719731 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:11:30.940944 sshd[5827]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:30.946330 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. 
Apr 21 10:11:30.947234 systemd[1]: sshd@9-46.62.167.141:22-50.85.169.122:44754.service: Deactivated successfully. Apr 21 10:11:30.949431 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:11:30.951286 systemd-logind[1477]: Removed session 9. Apr 21 10:11:35.988088 systemd[1]: Started sshd@10-46.62.167.141:22-50.85.169.122:44756.service - OpenSSH per-connection server daemon (50.85.169.122:44756). Apr 21 10:11:36.213212 sshd[5841]: Accepted publickey for core from 50.85.169.122 port 44756 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:36.216525 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:36.225136 systemd-logind[1477]: New session 10 of user core. Apr 21 10:11:36.229842 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:11:36.474004 sshd[5841]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:36.478454 systemd[1]: sshd@10-46.62.167.141:22-50.85.169.122:44756.service: Deactivated successfully. Apr 21 10:11:36.482335 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:11:36.484773 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:11:36.487408 systemd-logind[1477]: Removed session 10. Apr 21 10:11:41.529802 systemd[1]: Started sshd@11-46.62.167.141:22-50.85.169.122:43994.service - OpenSSH per-connection server daemon (50.85.169.122:43994). Apr 21 10:11:41.752676 sshd[5871]: Accepted publickey for core from 50.85.169.122 port 43994 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:41.756044 sshd[5871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:41.765444 systemd-logind[1477]: New session 11 of user core. Apr 21 10:11:41.770704 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 21 10:11:42.022828 sshd[5871]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:42.027272 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:11:42.027440 systemd[1]: sshd@11-46.62.167.141:22-50.85.169.122:43994.service: Deactivated successfully. Apr 21 10:11:42.029332 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:11:42.030364 systemd-logind[1477]: Removed session 11. Apr 21 10:11:42.067906 systemd[1]: Started sshd@12-46.62.167.141:22-50.85.169.122:44002.service - OpenSSH per-connection server daemon (50.85.169.122:44002). Apr 21 10:11:42.281044 sshd[5885]: Accepted publickey for core from 50.85.169.122 port 44002 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:42.283898 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:42.292112 systemd-logind[1477]: New session 12 of user core. Apr 21 10:11:42.296896 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:11:42.571086 sshd[5885]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:42.574608 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:11:42.575475 systemd[1]: sshd@12-46.62.167.141:22-50.85.169.122:44002.service: Deactivated successfully. Apr 21 10:11:42.577297 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:11:42.578430 systemd-logind[1477]: Removed session 12. Apr 21 10:11:42.609198 systemd[1]: Started sshd@13-46.62.167.141:22-50.85.169.122:44008.service - OpenSSH per-connection server daemon (50.85.169.122:44008). Apr 21 10:11:42.822979 sshd[5896]: Accepted publickey for core from 50.85.169.122 port 44008 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:42.825070 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:42.832935 systemd-logind[1477]: New session 13 of user core. 
Apr 21 10:11:42.839854 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:11:43.111258 sshd[5896]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:43.115438 systemd[1]: sshd@13-46.62.167.141:22-50.85.169.122:44008.service: Deactivated successfully. Apr 21 10:11:43.117808 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:11:43.119087 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:11:43.121462 systemd-logind[1477]: Removed session 13. Apr 21 10:11:48.156350 systemd[1]: Started sshd@14-46.62.167.141:22-50.85.169.122:44010.service - OpenSSH per-connection server daemon (50.85.169.122:44010). Apr 21 10:11:48.374656 sshd[5911]: Accepted publickey for core from 50.85.169.122 port 44010 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:48.376821 sshd[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:48.383434 systemd-logind[1477]: New session 14 of user core. Apr 21 10:11:48.392845 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:11:48.618028 sshd[5911]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:48.622470 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:11:48.623044 systemd[1]: sshd@14-46.62.167.141:22-50.85.169.122:44010.service: Deactivated successfully. Apr 21 10:11:48.624771 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:11:48.626687 systemd-logind[1477]: Removed session 14. Apr 21 10:11:48.666244 systemd[1]: Started sshd@15-46.62.167.141:22-50.85.169.122:44022.service - OpenSSH per-connection server daemon (50.85.169.122:44022). 
Apr 21 10:11:48.871842 sshd[5924]: Accepted publickey for core from 50.85.169.122 port 44022 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:48.874306 sshd[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:48.884733 systemd-logind[1477]: New session 15 of user core. Apr 21 10:11:48.892881 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:11:49.315095 sshd[5924]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:49.319074 systemd[1]: sshd@15-46.62.167.141:22-50.85.169.122:44022.service: Deactivated successfully. Apr 21 10:11:49.322526 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:11:49.325942 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:11:49.327487 systemd-logind[1477]: Removed session 15. Apr 21 10:11:49.368501 systemd[1]: Started sshd@16-46.62.167.141:22-50.85.169.122:44034.service - OpenSSH per-connection server daemon (50.85.169.122:44034). Apr 21 10:11:49.592357 sshd[5936]: Accepted publickey for core from 50.85.169.122 port 44034 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:49.595319 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:49.601638 systemd-logind[1477]: New session 16 of user core. Apr 21 10:11:49.609910 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:11:50.306028 sshd[5936]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:50.308862 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:11:50.310613 systemd[1]: sshd@16-46.62.167.141:22-50.85.169.122:44034.service: Deactivated successfully. Apr 21 10:11:50.312305 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:11:50.313435 systemd-logind[1477]: Removed session 16. 
Apr 21 10:11:50.346786 systemd[1]: Started sshd@17-46.62.167.141:22-50.85.169.122:42990.service - OpenSSH per-connection server daemon (50.85.169.122:42990). Apr 21 10:11:50.552631 sshd[6006]: Accepted publickey for core from 50.85.169.122 port 42990 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:50.554229 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:50.561447 systemd-logind[1477]: New session 17 of user core. Apr 21 10:11:50.566853 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:11:50.891692 sshd[6006]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:50.896374 systemd[1]: sshd@17-46.62.167.141:22-50.85.169.122:42990.service: Deactivated successfully. Apr 21 10:11:50.898687 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:11:50.899408 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:11:50.900774 systemd-logind[1477]: Removed session 17. Apr 21 10:11:50.938938 systemd[1]: Started sshd@18-46.62.167.141:22-50.85.169.122:42994.service - OpenSSH per-connection server daemon (50.85.169.122:42994). Apr 21 10:11:51.154276 sshd[6017]: Accepted publickey for core from 50.85.169.122 port 42994 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:51.157088 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:51.166875 systemd-logind[1477]: New session 18 of user core. Apr 21 10:11:51.171819 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:11:51.377370 sshd[6017]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:51.381212 systemd[1]: sshd@18-46.62.167.141:22-50.85.169.122:42994.service: Deactivated successfully. Apr 21 10:11:51.383322 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:11:51.385009 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. 
Apr 21 10:11:51.386241 systemd-logind[1477]: Removed session 18. Apr 21 10:11:56.428059 systemd[1]: Started sshd@19-46.62.167.141:22-50.85.169.122:42996.service - OpenSSH per-connection server daemon (50.85.169.122:42996). Apr 21 10:11:56.662931 sshd[6051]: Accepted publickey for core from 50.85.169.122 port 42996 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:11:56.666044 sshd[6051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:11:56.675719 systemd-logind[1477]: New session 19 of user core. Apr 21 10:11:56.685848 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:11:56.932720 sshd[6051]: pam_unix(sshd:session): session closed for user core Apr 21 10:11:56.939247 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:11:56.940911 systemd[1]: sshd@19-46.62.167.141:22-50.85.169.122:42996.service: Deactivated successfully. Apr 21 10:11:56.943759 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:11:56.944780 systemd-logind[1477]: Removed session 19. Apr 21 10:12:01.981073 systemd[1]: Started sshd@20-46.62.167.141:22-50.85.169.122:41174.service - OpenSSH per-connection server daemon (50.85.169.122:41174). Apr 21 10:12:02.188367 sshd[6075]: Accepted publickey for core from 50.85.169.122 port 41174 ssh2: RSA SHA256:TvBbOcsuuAb0TxLbWRb2Fse4xp/uEIqA97k9hHQoLKY Apr 21 10:12:02.191443 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:12:02.199756 systemd-logind[1477]: New session 20 of user core. Apr 21 10:12:02.205834 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 10:12:02.441654 sshd[6075]: pam_unix(sshd:session): session closed for user core Apr 21 10:12:02.445295 systemd[1]: sshd@20-46.62.167.141:22-50.85.169.122:41174.service: Deactivated successfully. Apr 21 10:12:02.447262 systemd[1]: session-20.scope: Deactivated successfully. 
Apr 21 10:12:02.449086 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit. Apr 21 10:12:02.451022 systemd-logind[1477]: Removed session 20. Apr 21 10:12:19.974642 kubelet[2567]: E0421 10:12:19.974551 2567 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46322->10.0.0.2:2379: read: connection timed out" Apr 21 10:12:20.294773 systemd[1]: cri-containerd-02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370.scope: Deactivated successfully. Apr 21 10:12:20.295763 systemd[1]: cri-containerd-02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370.scope: Consumed 3.265s CPU time, 17.2M memory peak, 0B memory swap peak. Apr 21 10:12:20.330169 containerd[1504]: time="2026-04-21T10:12:20.330090307Z" level=info msg="shim disconnected" id=02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370 namespace=k8s.io Apr 21 10:12:20.330169 containerd[1504]: time="2026-04-21T10:12:20.330141703Z" level=warning msg="cleaning up after shim disconnected" id=02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370 namespace=k8s.io Apr 21 10:12:20.331861 containerd[1504]: time="2026-04-21T10:12:20.330148824Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:20.332144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370-rootfs.mount: Deactivated successfully. Apr 21 10:12:20.971101 systemd[1]: cri-containerd-f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef.scope: Deactivated successfully. Apr 21 10:12:20.971773 systemd[1]: cri-containerd-f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef.scope: Consumed 7.546s CPU time. 
Apr 21 10:12:20.984130 kubelet[2567]: I0421 10:12:20.983709 2567 scope.go:117] "RemoveContainer" containerID="02879fced05f1553806ed0576ddba1f8170f93d02d42956c5737b6a177f85370" Apr 21 10:12:20.986476 containerd[1504]: time="2026-04-21T10:12:20.986421523Z" level=info msg="CreateContainer within sandbox \"a0ef7518e8abb9639552d56d413a1ab0c02f05e3fc85169a22c997915f8ed162\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 21 10:12:21.009968 containerd[1504]: time="2026-04-21T10:12:21.009850685Z" level=info msg="CreateContainer within sandbox \"a0ef7518e8abb9639552d56d413a1ab0c02f05e3fc85169a22c997915f8ed162\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"74c2776e0285e37fef7d08bdd0313cdcf3af3b41d5cad56ab7266b6117dc1a9b\"" Apr 21 10:12:21.011040 containerd[1504]: time="2026-04-21T10:12:21.010842372Z" level=info msg="StartContainer for \"74c2776e0285e37fef7d08bdd0313cdcf3af3b41d5cad56ab7266b6117dc1a9b\"" Apr 21 10:12:21.030217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef-rootfs.mount: Deactivated successfully. Apr 21 10:12:21.030737 containerd[1504]: time="2026-04-21T10:12:21.030501611Z" level=info msg="shim disconnected" id=f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef namespace=k8s.io Apr 21 10:12:21.030737 containerd[1504]: time="2026-04-21T10:12:21.030549933Z" level=warning msg="cleaning up after shim disconnected" id=f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef namespace=k8s.io Apr 21 10:12:21.030737 containerd[1504]: time="2026-04-21T10:12:21.030557724Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:12:21.047385 systemd[1]: Started cri-containerd-74c2776e0285e37fef7d08bdd0313cdcf3af3b41d5cad56ab7266b6117dc1a9b.scope - libcontainer container 74c2776e0285e37fef7d08bdd0313cdcf3af3b41d5cad56ab7266b6117dc1a9b. 
Apr 21 10:12:21.053732 containerd[1504]: time="2026-04-21T10:12:21.052880020Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:12:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:12:21.085276 containerd[1504]: time="2026-04-21T10:12:21.085240420Z" level=info msg="StartContainer for \"74c2776e0285e37fef7d08bdd0313cdcf3af3b41d5cad56ab7266b6117dc1a9b\" returns successfully" Apr 21 10:12:21.989135 kubelet[2567]: I0421 10:12:21.989079 2567 scope.go:117] "RemoveContainer" containerID="f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef" Apr 21 10:12:21.991502 containerd[1504]: time="2026-04-21T10:12:21.991451725Z" level=info msg="CreateContainer within sandbox \"5a28f7d39b0e84c89d3b677f48460c40acd4be354fb162aceb669bd76ded6179\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 21 10:12:22.003374 systemd[1]: run-containerd-runc-k8s.io-74c2776e0285e37fef7d08bdd0313cdcf3af3b41d5cad56ab7266b6117dc1a9b-runc.hE4pTn.mount: Deactivated successfully. Apr 21 10:12:22.009588 containerd[1504]: time="2026-04-21T10:12:22.009392635Z" level=info msg="CreateContainer within sandbox \"5a28f7d39b0e84c89d3b677f48460c40acd4be354fb162aceb669bd76ded6179\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329\"" Apr 21 10:12:22.010051 containerd[1504]: time="2026-04-21T10:12:22.010028853Z" level=info msg="StartContainer for \"6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329\"" Apr 21 10:12:22.037699 systemd[1]: Started cri-containerd-6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329.scope - libcontainer container 6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329. 
Apr 21 10:12:22.061726 containerd[1504]: time="2026-04-21T10:12:22.061690258Z" level=info msg="StartContainer for \"6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329\" returns successfully"
Apr 21 10:12:23.533944 systemd[1]: Started sshd@21-46.62.167.141:22-2.57.122.177:50890.service - OpenSSH per-connection server daemon (2.57.122.177:50890).
Apr 21 10:12:23.612135 sshd[6314]: Connection closed by 2.57.122.177 port 50890
Apr 21 10:12:23.613678 systemd[1]: sshd@21-46.62.167.141:22-2.57.122.177:50890.service: Deactivated successfully.
Apr 21 10:12:24.968235 kubelet[2567]: E0421 10:12:24.966119 2567 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46146->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-7-a-afac96dda8.18a8579149e64bf2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-7-a-afac96dda8,UID:84d7926e3f6ea01b1c7772dc3f09babd,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-a-afac96dda8,},FirstTimestamp:2026-04-21 10:12:14.519364594 +0000 UTC m=+124.128874658,LastTimestamp:2026-04-21 10:12:14.519364594 +0000 UTC m=+124.128874658,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-a-afac96dda8,}"
Apr 21 10:12:25.170609 systemd[1]: cri-containerd-4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca.scope: Deactivated successfully.
Apr 21 10:12:25.171219 systemd[1]: cri-containerd-4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca.scope: Consumed 1.425s CPU time, 15.1M memory peak, 0B memory swap peak.
Apr 21 10:12:25.195246 containerd[1504]: time="2026-04-21T10:12:25.194113229Z" level=info msg="shim disconnected" id=4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca namespace=k8s.io
Apr 21 10:12:25.195246 containerd[1504]: time="2026-04-21T10:12:25.194161161Z" level=warning msg="cleaning up after shim disconnected" id=4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca namespace=k8s.io
Apr 21 10:12:25.195246 containerd[1504]: time="2026-04-21T10:12:25.194180218Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:12:25.194783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca-rootfs.mount: Deactivated successfully.
Apr 21 10:12:26.004290 kubelet[2567]: I0421 10:12:26.004226 2567 scope.go:117] "RemoveContainer" containerID="4c5fc2fc8b89df7b750984e837ee5d216d9f9ee776f4fb4efb75b0d6c97812ca"
Apr 21 10:12:26.007337 containerd[1504]: time="2026-04-21T10:12:26.007289654Z" level=info msg="CreateContainer within sandbox \"f226a4361b1e9f35af9b20d7ac4efa219a6b2eea5af7f181f28ef05154df30d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 21 10:12:26.028781 containerd[1504]: time="2026-04-21T10:12:26.027482455Z" level=info msg="CreateContainer within sandbox \"f226a4361b1e9f35af9b20d7ac4efa219a6b2eea5af7f181f28ef05154df30d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9d137d95b234ba7f5a737c7cef4e8046cfd74ee1a029ec84ec1e86397d8850a6\""
Apr 21 10:12:26.031384 containerd[1504]: time="2026-04-21T10:12:26.030197602Z" level=info msg="StartContainer for \"9d137d95b234ba7f5a737c7cef4e8046cfd74ee1a029ec84ec1e86397d8850a6\""
Apr 21 10:12:26.078694 systemd[1]: Started cri-containerd-9d137d95b234ba7f5a737c7cef4e8046cfd74ee1a029ec84ec1e86397d8850a6.scope - libcontainer container 9d137d95b234ba7f5a737c7cef4e8046cfd74ee1a029ec84ec1e86397d8850a6.
Apr 21 10:12:26.130675 containerd[1504]: time="2026-04-21T10:12:26.130182238Z" level=info msg="StartContainer for \"9d137d95b234ba7f5a737c7cef4e8046cfd74ee1a029ec84ec1e86397d8850a6\" returns successfully"
Apr 21 10:12:29.975393 kubelet[2567]: E0421 10:12:29.975296 2567 controller.go:195] "Failed to update lease" err="Put \"https://46.62.167.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-afac96dda8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 21 10:12:30.981753 kubelet[2567]: I0421 10:12:30.981690 2567 status_manager.go:895] "Failed to get status for pod" podUID="84d7926e3f6ea01b1c7772dc3f09babd" pod="kube-system/kube-apiserver-ci-4081-3-7-a-afac96dda8" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:46246->10.0.0.2:2379: read: connection timed out"
Apr 21 10:12:33.227897 systemd[1]: cri-containerd-6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329.scope: Deactivated successfully.
Apr 21 10:12:33.275204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329-rootfs.mount: Deactivated successfully.
Apr 21 10:12:33.279987 containerd[1504]: time="2026-04-21T10:12:33.279880167Z" level=info msg="shim disconnected" id=6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329 namespace=k8s.io
Apr 21 10:12:33.279987 containerd[1504]: time="2026-04-21T10:12:33.279979064Z" level=warning msg="cleaning up after shim disconnected" id=6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329 namespace=k8s.io
Apr 21 10:12:33.281146 containerd[1504]: time="2026-04-21T10:12:33.280005042Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:12:34.027680 kubelet[2567]: I0421 10:12:34.027588 2567 scope.go:117] "RemoveContainer" containerID="f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef"
Apr 21 10:12:34.028500 kubelet[2567]: I0421 10:12:34.028087 2567 scope.go:117] "RemoveContainer" containerID="6cf49c1a7e2003dbefc7b3e4060a73ee3de93b96dbb918f2fcf6bd76d277f329"
Apr 21 10:12:34.028500 kubelet[2567]: E0421 10:12:34.028386 2567 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-7l65b_tigera-operator(1477713e-818f-47b5-beaa-604d0169758e)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-7l65b" podUID="1477713e-818f-47b5-beaa-604d0169758e"
Apr 21 10:12:34.030026 containerd[1504]: time="2026-04-21T10:12:34.029737417Z" level=info msg="RemoveContainer for \"f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef\""
Apr 21 10:12:34.036280 containerd[1504]: time="2026-04-21T10:12:34.036221581Z" level=info msg="RemoveContainer for \"f001a12ef4c864dd05d205af0e1ad35d96b9578b42b20026f64eb40d9bd8aaef\" returns successfully"
Apr 21 10:12:39.976688 kubelet[2567]: E0421 10:12:39.975995 2567 controller.go:195] "Failed to update lease" err="Put \"https://46.62.167.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-a-afac96dda8?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"