Apr 24 23:34:42.886915 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 24 23:34:42.886937 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 24 22:19:35 -00 2026
Apr 24 23:34:42.886947 kernel: KASLR enabled
Apr 24 23:34:42.886953 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 24 23:34:42.886959 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 24 23:34:42.886965 kernel: random: crng init done
Apr 24 23:34:42.886972 kernel: ACPI: Early table checksum verification disabled
Apr 24 23:34:42.886978 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 24 23:34:42.886984 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 24 23:34:42.886992 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.886998 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887004 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887010 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887016 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887024 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887031 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887038 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887044 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 23:34:42.887051 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 24 23:34:42.887064 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 24 23:34:42.887071 kernel: NUMA: Failed to initialise from firmware
Apr 24 23:34:42.887077 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 24 23:34:42.887084 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Apr 24 23:34:42.887090 kernel: Zone ranges:
Apr 24 23:34:42.887096 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 24 23:34:42.887104 kernel: DMA32 empty
Apr 24 23:34:42.887111 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 24 23:34:42.887117 kernel: Movable zone start for each node
Apr 24 23:34:42.887123 kernel: Early memory node ranges
Apr 24 23:34:42.887130 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 24 23:34:42.887137 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 24 23:34:42.887143 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 24 23:34:42.887150 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 24 23:34:42.887156 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 24 23:34:42.887162 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 24 23:34:42.887168 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 24 23:34:42.887175 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 24 23:34:42.887183 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 24 23:34:42.887189 kernel: psci: probing for conduit method from ACPI.
Apr 24 23:34:42.887195 kernel: psci: PSCIv1.1 detected in firmware.
Apr 24 23:34:42.887204 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 24 23:34:42.887211 kernel: psci: Trusted OS migration not required
Apr 24 23:34:42.887218 kernel: psci: SMC Calling Convention v1.1
Apr 24 23:34:42.887235 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 24 23:34:42.887244 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 24 23:34:42.887251 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 24 23:34:42.887258 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 24 23:34:42.887265 kernel: Detected PIPT I-cache on CPU0
Apr 24 23:34:42.887271 kernel: CPU features: detected: GIC system register CPU interface
Apr 24 23:34:42.887278 kernel: CPU features: detected: Hardware dirty bit management
Apr 24 23:34:42.887285 kernel: CPU features: detected: Spectre-v4
Apr 24 23:34:42.887292 kernel: CPU features: detected: Spectre-BHB
Apr 24 23:34:42.887298 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 24 23:34:42.887312 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 24 23:34:42.887319 kernel: CPU features: detected: ARM erratum 1418040
Apr 24 23:34:42.887326 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 24 23:34:42.887333 kernel: alternatives: applying boot alternatives
Apr 24 23:34:42.887344 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:34:42.887352 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 23:34:42.887359 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 23:34:42.887366 kernel: Fallback order for Node 0: 0
Apr 24 23:34:42.887373 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 24 23:34:42.887379 kernel: Policy zone: Normal
Apr 24 23:34:42.887386 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 23:34:42.887394 kernel: software IO TLB: area num 2.
Apr 24 23:34:42.887401 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 24 23:34:42.887409 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Apr 24 23:34:42.887416 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 23:34:42.887423 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 23:34:42.887430 kernel: rcu: RCU event tracing is enabled.
Apr 24 23:34:42.887437 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 23:34:42.887444 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 23:34:42.887451 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 23:34:42.887458 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 23:34:42.887465 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 23:34:42.887472 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 24 23:34:42.887480 kernel: GICv3: 256 SPIs implemented
Apr 24 23:34:42.887487 kernel: GICv3: 0 Extended SPIs implemented
Apr 24 23:34:42.887494 kernel: Root IRQ handler: gic_handle_irq
Apr 24 23:34:42.887501 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 24 23:34:42.887508 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 24 23:34:42.887515 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 24 23:34:42.887522 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 24 23:34:42.887528 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 24 23:34:42.887535 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 24 23:34:42.887542 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 24 23:34:42.887549 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 23:34:42.887565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 24 23:34:42.887573 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 24 23:34:42.887593 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 24 23:34:42.887602 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 24 23:34:42.887608 kernel: Console: colour dummy device 80x25
Apr 24 23:34:42.887616 kernel: ACPI: Core revision 20230628
Apr 24 23:34:42.887623 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 24 23:34:42.887630 kernel: pid_max: default: 32768 minimum: 301
Apr 24 23:34:42.887637 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 24 23:34:42.887644 kernel: landlock: Up and running.
Apr 24 23:34:42.887654 kernel: SELinux: Initializing.
Apr 24 23:34:42.887661 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:34:42.887668 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 23:34:42.887675 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:34:42.887682 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 23:34:42.887689 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 23:34:42.887696 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 23:34:42.887703 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 24 23:34:42.887710 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 24 23:34:42.887719 kernel: Remapping and enabling EFI services.
Apr 24 23:34:42.887726 kernel: smp: Bringing up secondary CPUs ...
Apr 24 23:34:42.887733 kernel: Detected PIPT I-cache on CPU1
Apr 24 23:34:42.887740 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 24 23:34:42.887747 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 24 23:34:42.887754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 24 23:34:42.887761 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 24 23:34:42.887768 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 23:34:42.887775 kernel: SMP: Total of 2 processors activated.
Apr 24 23:34:42.887784 kernel: CPU features: detected: 32-bit EL0 Support
Apr 24 23:34:42.887791 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 24 23:34:42.887798 kernel: CPU features: detected: Common not Private translations
Apr 24 23:34:42.887810 kernel: CPU features: detected: CRC32 instructions
Apr 24 23:34:42.887819 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 24 23:34:42.887827 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 24 23:34:42.887834 kernel: CPU features: detected: LSE atomic instructions
Apr 24 23:34:42.887841 kernel: CPU features: detected: Privileged Access Never
Apr 24 23:34:42.887849 kernel: CPU features: detected: RAS Extension Support
Apr 24 23:34:42.887858 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 24 23:34:42.887865 kernel: CPU: All CPU(s) started at EL1
Apr 24 23:34:42.887873 kernel: alternatives: applying system-wide alternatives
Apr 24 23:34:42.887880 kernel: devtmpfs: initialized
Apr 24 23:34:42.887887 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 23:34:42.887895 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 23:34:42.887902 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 23:34:42.887910 kernel: SMBIOS 3.0.0 present.
Apr 24 23:34:42.887919 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 24 23:34:42.887926 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 23:34:42.887934 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 24 23:34:42.887941 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 24 23:34:42.887949 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 24 23:34:42.887956 kernel: audit: initializing netlink subsys (disabled)
Apr 24 23:34:42.887964 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Apr 24 23:34:42.887971 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 23:34:42.887978 kernel: cpuidle: using governor menu
Apr 24 23:34:42.887987 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 24 23:34:42.887994 kernel: ASID allocator initialised with 32768 entries
Apr 24 23:34:42.888002 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 23:34:42.888009 kernel: Serial: AMBA PL011 UART driver
Apr 24 23:34:42.888017 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 24 23:34:42.888024 kernel: Modules: 0 pages in range for non-PLT usage
Apr 24 23:34:42.888031 kernel: Modules: 509008 pages in range for PLT usage
Apr 24 23:34:42.888043 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 23:34:42.888050 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 23:34:42.888060 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 24 23:34:42.888067 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 24 23:34:42.888075 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 23:34:42.888082 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 23:34:42.888089 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 24 23:34:42.888097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 24 23:34:42.888104 kernel: ACPI: Added _OSI(Module Device)
Apr 24 23:34:42.888111 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 23:34:42.888118 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 23:34:42.888128 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 23:34:42.888135 kernel: ACPI: Interpreter enabled
Apr 24 23:34:42.888142 kernel: ACPI: Using GIC for interrupt routing
Apr 24 23:34:42.888150 kernel: ACPI: MCFG table detected, 1 entries
Apr 24 23:34:42.888157 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 24 23:34:42.888165 kernel: printk: console [ttyAMA0] enabled
Apr 24 23:34:42.888172 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 23:34:42.888346 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 23:34:42.888426 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 24 23:34:42.888516 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 24 23:34:42.889647 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 24 23:34:42.889775 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 24 23:34:42.889787 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 24 23:34:42.889795 kernel: PCI host bridge to bus 0000:00
Apr 24 23:34:42.889877 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 24 23:34:42.889955 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 24 23:34:42.890027 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 24 23:34:42.890100 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 23:34:42.890197 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 24 23:34:42.890335 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 24 23:34:42.890410 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 24 23:34:42.890490 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 24 23:34:42.890575 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.890975 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 24 23:34:42.891075 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.891148 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 24 23:34:42.891221 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.891318 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 24 23:34:42.891412 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.891483 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 24 23:34:42.891556 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.892719 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 24 23:34:42.892812 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.892898 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 24 23:34:42.893000 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.893078 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 24 23:34:42.893195 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.893316 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 24 23:34:42.893397 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 24 23:34:42.893466 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 24 23:34:42.893545 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 24 23:34:42.893627 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 24 23:34:42.893720 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 24 23:34:42.893793 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 24 23:34:42.893863 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 24 23:34:42.893932 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 24 23:34:42.894008 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 24 23:34:42.894087 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 24 23:34:42.894165 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 24 23:34:42.894245 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 24 23:34:42.894317 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 24 23:34:42.894398 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 24 23:34:42.894477 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 24 23:34:42.894568 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 24 23:34:42.895757 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 24 23:34:42.895850 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 24 23:34:42.895936 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 24 23:34:42.896006 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 24 23:34:42.896074 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 24 23:34:42.896157 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 24 23:34:42.896225 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 24 23:34:42.896312 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 24 23:34:42.896381 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 24 23:34:42.896450 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 24 23:34:42.896517 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 24 23:34:42.896623 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 24 23:34:42.896706 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 24 23:34:42.896783 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 24 23:34:42.896851 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 24 23:34:42.896936 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 24 23:34:42.897006 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 24 23:34:42.897074 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 24 23:34:42.897148 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 24 23:34:42.897215 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 24 23:34:42.897336 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 24 23:34:42.897409 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 24 23:34:42.897476 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 24 23:34:42.897544 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 24 23:34:42.899495 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 24 23:34:42.899578 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 24 23:34:42.899676 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 24 23:34:42.899755 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 24 23:34:42.899821 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 24 23:34:42.899887 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 24 23:34:42.899965 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 24 23:34:42.900041 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 24 23:34:42.900119 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 24 23:34:42.900198 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 24 23:34:42.900324 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 24 23:34:42.900396 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 24 23:34:42.900500 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 24 23:34:42.900578 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 24 23:34:42.900698 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 24 23:34:42.900783 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 24 23:34:42.900856 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 24 23:34:42.900928 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 24 23:34:42.900995 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 24 23:34:42.901059 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 24 23:34:42.901126 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 24 23:34:42.901192 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 24 23:34:42.901276 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 24 23:34:42.901345 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 24 23:34:42.901415 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 24 23:34:42.901483 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 24 23:34:42.901561 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 24 23:34:42.902051 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 24 23:34:42.902132 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 24 23:34:42.902226 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 24 23:34:42.902326 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 24 23:34:42.902401 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 24 23:34:42.902469 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 24 23:34:42.902534 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 24 23:34:42.904664 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 24 23:34:42.904759 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 24 23:34:42.904841 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 24 23:34:42.904923 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 24 23:34:42.904993 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 24 23:34:42.905064 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 24 23:34:42.905132 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 24 23:34:42.905198 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 24 23:34:42.905303 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 24 23:34:42.905384 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 24 23:34:42.905453 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 24 23:34:42.905519 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 24 23:34:42.905621 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 24 23:34:42.905705 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 24 23:34:42.905773 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 24 23:34:42.905838 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 24 23:34:42.905919 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 24 23:34:42.905996 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 24 23:34:42.906067 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 24 23:34:42.906136 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 24 23:34:42.906204 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 24 23:34:42.906294 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 24 23:34:42.906364 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 24 23:34:42.906430 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 24 23:34:42.906502 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 24 23:34:42.906574 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 24 23:34:42.906694 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 24 23:34:42.906763 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 24 23:34:42.906829 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 24 23:34:42.906902 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 24 23:34:42.906971 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 24 23:34:42.907038 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 24 23:34:42.907104 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 24 23:34:42.907174 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 24 23:34:42.907254 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 24 23:34:42.907334 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 24 23:34:42.907403 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 24 23:34:42.907482 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 24 23:34:42.907550 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 24 23:34:42.907628 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 24 23:34:42.907704 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 24 23:34:42.907781 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 24 23:34:42.907860 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 24 23:34:42.907927 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 24 23:34:42.907993 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 24 23:34:42.908059 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 24 23:34:42.908134 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 24 23:34:42.908204 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 24 23:34:42.908311 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 24 23:34:42.908394 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 24 23:34:42.908463 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 24 23:34:42.908532 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 24 23:34:42.910696 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 24 23:34:42.910787 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 24 23:34:42.910869 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 24 23:34:42.910939 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 24 23:34:42.911004 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 24 23:34:42.911097 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 24 23:34:42.911170 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 24 23:34:42.911279 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 24 23:34:42.911355 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 24 23:34:42.911420 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 24 23:34:42.911486 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 24 23:34:42.911564 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 24 23:34:42.911656 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 24 23:34:42.911736 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 24 23:34:42.911804 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 24 23:34:42.911872 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 24 23:34:42.911937 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 24 23:34:42.911998 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 24 23:34:42.912071 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 24 23:34:42.912134 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 24 23:34:42.912199 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 24 23:34:42.912296 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 24 23:34:42.912362 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 24 23:34:42.912423 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 24 23:34:42.912492 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 24 23:34:42.912554 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 24 23:34:42.913756 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 24 23:34:42.913842 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 24 23:34:42.913905 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 24 23:34:42.913981 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 24 23:34:42.914048 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 24 23:34:42.914111 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 24 23:34:42.914172 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 24 23:34:42.914291 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 24 23:34:42.914361 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 24 23:34:42.914433 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 24 23:34:42.914502 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 24 23:34:42.914568 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 24 23:34:42.914679 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 24 23:34:42.914759 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 24 23:34:42.914822 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 24 23:34:42.914883 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 24 23:34:42.914951 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 24 23:34:42.915019 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 24 23:34:42.915086 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 24 23:34:42.915096 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 24 23:34:42.915105 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 24 23:34:42.915113 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 24 23:34:42.915121 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 24 23:34:42.915129 kernel: iommu: Default domain type: Translated
Apr 24 23:34:42.915137 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 24 23:34:42.915145 kernel: efivars: Registered efivars operations
Apr 24 23:34:42.915154 kernel: vgaarb: loaded
Apr 24 23:34:42.915163 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 24 23:34:42.915171 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 23:34:42.915179 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 23:34:42.915187 kernel: pnp: PnP ACPI init
Apr 24 23:34:42.915278 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 24 23:34:42.915290 kernel: pnp: PnP ACPI: found 1 devices
Apr 24 23:34:42.915298 kernel: NET: Registered PF_INET protocol family
Apr 24 23:34:42.915307 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 23:34:42.915317 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 23:34:42.915325 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 23:34:42.915333 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 23:34:42.915341
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 24 23:34:42.915349 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 24 23:34:42.915357 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 24 23:34:42.915365 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 24 23:34:42.915373 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 24 23:34:42.915457 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 24 23:34:42.915472 kernel: PCI: CLS 0 bytes, default 64 Apr 24 23:34:42.915480 kernel: kvm [1]: HYP mode not available Apr 24 23:34:42.915488 kernel: Initialise system trusted keyrings Apr 24 23:34:42.915495 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 24 23:34:42.915503 kernel: Key type asymmetric registered Apr 24 23:34:42.915511 kernel: Asymmetric key parser 'x509' registered Apr 24 23:34:42.915519 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 24 23:34:42.915527 kernel: io scheduler mq-deadline registered Apr 24 23:34:42.915535 kernel: io scheduler kyber registered Apr 24 23:34:42.915544 kernel: io scheduler bfq registered Apr 24 23:34:42.915553 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 24 23:34:42.917682 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 24 23:34:42.917783 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 24 23:34:42.917853 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.917929 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 24 23:34:42.917998 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 24 23:34:42.918078 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.918149 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 24 23:34:42.918222 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 24 23:34:42.918307 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.918379 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 24 23:34:42.918450 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 24 23:34:42.918527 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.918670 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 24 23:34:42.918744 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 24 23:34:42.918815 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.918897 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 24 23:34:42.918969 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 24 23:34:42.919035 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.919110 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 24 23:34:42.919185 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 24 23:34:42.919292 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.919372 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 24 23:34:42.919447 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 24 23:34:42.919515 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 24 23:34:42.919527 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 24 23:34:42.919616 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Apr 24 23:34:42.919688 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Apr 24 23:34:42.919758 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Apr 24 23:34:42.919775 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 24 23:34:42.919787 kernel: ACPI: button: Power Button [PWRB]
Apr 24 23:34:42.919795 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 24 23:34:42.919874 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Apr 24 23:34:42.919947 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Apr 24 23:34:42.919963 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 23:34:42.919976 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 24 23:34:42.920045 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Apr 24 23:34:42.920057 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Apr 24 23:34:42.920065 kernel: thunder_xcv, ver 1.0
Apr 24 23:34:42.920075 kernel: thunder_bgx, ver 1.0
Apr 24 23:34:42.920083 kernel: nicpf, ver 1.0
Apr 24 23:34:42.920091 kernel: nicvf, ver 1.0
Apr 24 23:34:42.920174 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 24 23:34:42.920250 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-24T23:34:42 UTC (1777073682)
Apr 24 23:34:42.920261 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 24 23:34:42.920270 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 24 23:34:42.920277 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 24 23:34:42.920288 kernel: watchdog: Hard watchdog permanently disabled
Apr 24 23:34:42.920296 kernel: NET: Registered PF_INET6 protocol family
Apr 24 23:34:42.920304 kernel: Segment Routing with IPv6
Apr 24 23:34:42.920311 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 23:34:42.920319 kernel: NET: Registered PF_PACKET protocol family
Apr 24 23:34:42.920327 kernel: Key type dns_resolver registered
Apr 24 23:34:42.920335 kernel: registered taskstats version 1
Apr 24 23:34:42.920343 kernel: Loading compiled-in X.509 certificates
Apr 24 23:34:42.920351 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 96a6e7da7ac9a3ef656057ccd8e13f251b310c24'
Apr 24 23:34:42.920361 kernel: Key type .fscrypt registered
Apr 24 23:34:42.920368 kernel: Key type fscrypt-provisioning registered
Apr 24 23:34:42.920376 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 23:34:42.920384 kernel: ima: Allocated hash algorithm: sha1
Apr 24 23:34:42.920391 kernel: ima: No architecture policies found
Apr 24 23:34:42.920400 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 24 23:34:42.920407 kernel: clk: Disabling unused clocks
Apr 24 23:34:42.920415 kernel: Freeing unused kernel memory: 39424K
Apr 24 23:34:42.920423 kernel: Run /init as init process
Apr 24 23:34:42.920438 kernel: with arguments:
Apr 24 23:34:42.920446 kernel: /init
Apr 24 23:34:42.920454 kernel: with environment:
Apr 24 23:34:42.920461 kernel: HOME=/
Apr 24 23:34:42.920469 kernel: TERM=linux
Apr 24 23:34:42.920479 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:34:42.920489 systemd[1]: Detected virtualization kvm.
Apr 24 23:34:42.920497 systemd[1]: Detected architecture arm64.
Apr 24 23:34:42.920507 systemd[1]: Running in initrd.
Apr 24 23:34:42.920515 systemd[1]: No hostname configured, using default hostname.
Apr 24 23:34:42.920523 systemd[1]: Hostname set to .
Apr 24 23:34:42.920531 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:34:42.920539 systemd[1]: Queued start job for default target initrd.target.
Apr 24 23:34:42.920547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:34:42.920556 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:34:42.920565 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 23:34:42.920578 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:34:42.920627 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 23:34:42.920639 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 23:34:42.920649 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 23:34:42.920657 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 23:34:42.920672 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:34:42.920689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:34:42.920700 systemd[1]: Reached target paths.target - Path Units.
Apr 24 23:34:42.920709 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:34:42.920751 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:34:42.920759 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 23:34:42.920768 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:34:42.920776 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:34:42.920785 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 23:34:42.920793 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 24 23:34:42.920804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:34:42.920823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:34:42.920832 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:34:42.920840 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 23:34:42.920849 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 23:34:42.920857 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 23:34:42.920865 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 23:34:42.920874 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 23:34:42.920883 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 23:34:42.920897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 23:34:42.920906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:42.920914 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 23:34:42.920922 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:34:42.920955 systemd-journald[238]: Collecting audit messages is disabled.
Apr 24 23:34:42.920977 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 23:34:42.920987 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 23:34:42.920995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 23:34:42.921005 kernel: Bridge firewalling registered
Apr 24 23:34:42.921014 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:34:42.921023 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:42.921031 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:42.921041 systemd-journald[238]: Journal started
Apr 24 23:34:42.921060 systemd-journald[238]: Runtime Journal (/run/log/journal/b6695b297eef494292e6772580446ba9) is 8.0M, max 76.6M, 68.6M free.
Apr 24 23:34:42.882853 systemd-modules-load[239]: Inserted module 'overlay'
Apr 24 23:34:42.904121 systemd-modules-load[239]: Inserted module 'br_netfilter'
Apr 24 23:34:42.926609 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 23:34:42.926650 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 23:34:42.931616 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:34:42.939118 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 23:34:42.943928 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 23:34:42.949703 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:42.952729 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 23:34:42.955568 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:34:42.958106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:34:42.968042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:34:42.976863 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 23:34:42.979697 dracut-cmdline[268]: dracut-dracut-053
Apr 24 23:34:42.980941 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=63304dd98a277d4592d17e0085ae3f91ca70cc8ec6dedfdd357a1e9755f9a8b3
Apr 24 23:34:43.008332 systemd-resolved[278]: Positive Trust Anchors:
Apr 24 23:34:43.008347 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 23:34:43.008379 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 23:34:43.018708 systemd-resolved[278]: Defaulting to hostname 'linux'.
Apr 24 23:34:43.019808 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 23:34:43.022074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:34:43.055638 kernel: SCSI subsystem initialized
Apr 24 23:34:43.060634 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 23:34:43.068685 kernel: iscsi: registered transport (tcp)
Apr 24 23:34:43.081634 kernel: iscsi: registered transport (qla4xxx)
Apr 24 23:34:43.081739 kernel: QLogic iSCSI HBA Driver
Apr 24 23:34:43.128779 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:34:43.135852 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 23:34:43.159851 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 23:34:43.159932 kernel: device-mapper: uevent: version 1.0.3
Apr 24 23:34:43.159944 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 24 23:34:43.212625 kernel: raid6: neonx8 gen() 15566 MB/s
Apr 24 23:34:43.229632 kernel: raid6: neonx4 gen() 13398 MB/s
Apr 24 23:34:43.246629 kernel: raid6: neonx2 gen() 13056 MB/s
Apr 24 23:34:43.263627 kernel: raid6: neonx1 gen() 10054 MB/s
Apr 24 23:34:43.280664 kernel: raid6: int64x8 gen() 6940 MB/s
Apr 24 23:34:43.297622 kernel: raid6: int64x4 gen() 7303 MB/s
Apr 24 23:34:43.314635 kernel: raid6: int64x2 gen() 6074 MB/s
Apr 24 23:34:43.331634 kernel: raid6: int64x1 gen() 5024 MB/s
Apr 24 23:34:43.331716 kernel: raid6: using algorithm neonx8 gen() 15566 MB/s
Apr 24 23:34:43.348673 kernel: raid6: .... xor() 11901 MB/s, rmw enabled
Apr 24 23:34:43.348764 kernel: raid6: using neon recovery algorithm
Apr 24 23:34:43.353629 kernel: xor: measuring software checksum speed
Apr 24 23:34:43.353698 kernel: 8regs : 19745 MB/sec
Apr 24 23:34:43.353716 kernel: 32regs : 19664 MB/sec
Apr 24 23:34:43.354619 kernel: arm64_neon : 26936 MB/sec
Apr 24 23:34:43.354653 kernel: xor: using function: arm64_neon (26936 MB/sec)
Apr 24 23:34:43.404652 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 23:34:43.419500 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:34:43.430909 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:34:43.443644 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Apr 24 23:34:43.447127 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:34:43.458965 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 23:34:43.477500 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Apr 24 23:34:43.514873 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:34:43.520914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 23:34:43.573244 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:34:43.581914 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 23:34:43.601942 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:34:43.605779 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:34:43.606452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:34:43.609319 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:34:43.615914 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 23:34:43.639680 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:34:43.678611 kernel: scsi host0: Virtio SCSI HBA
Apr 24 23:34:43.685675 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 24 23:34:43.688364 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 24 23:34:43.702905 kernel: ACPI: bus type USB registered
Apr 24 23:34:43.703026 kernel: usbcore: registered new interface driver usbfs
Apr 24 23:34:43.703057 kernel: usbcore: registered new interface driver hub
Apr 24 23:34:43.704063 kernel: usbcore: registered new device driver usb
Apr 24 23:34:43.717963 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:34:43.718088 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:43.720353 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:43.720978 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:34:43.722709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:43.723879 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:43.732902 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 23:34:43.748614 kernel: sr 0:0:0:0: Power-on or device reset occurred
Apr 24 23:34:43.751601 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Apr 24 23:34:43.751800 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 24 23:34:43.751813 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 24 23:34:43.751919 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Apr 24 23:34:43.752617 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Apr 24 23:34:43.752744 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Apr 24 23:34:43.755601 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Apr 24 23:34:43.755755 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Apr 24 23:34:43.755717 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:43.758393 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Apr 24 23:34:43.761763 kernel: hub 1-0:1.0: USB hub found
Apr 24 23:34:43.761976 kernel: hub 1-0:1.0: 4 ports detected
Apr 24 23:34:43.764223 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Apr 24 23:34:43.766849 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 23:34:43.770670 kernel: hub 2-0:1.0: USB hub found
Apr 24 23:34:43.770871 kernel: hub 2-0:1.0: 4 ports detected
Apr 24 23:34:43.782795 kernel: sd 0:0:0:1: Power-on or device reset occurred
Apr 24 23:34:43.782990 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Apr 24 23:34:43.784601 kernel: sd 0:0:0:1: [sda] Write Protect is off
Apr 24 23:34:43.784760 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Apr 24 23:34:43.784849 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 24 23:34:43.789999 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 23:34:43.790058 kernel: GPT:17805311 != 80003071
Apr 24 23:34:43.790069 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 23:34:43.790885 kernel: GPT:17805311 != 80003071
Apr 24 23:34:43.790917 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 23:34:43.791655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:43.793617 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Apr 24 23:34:43.794039 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:43.825701 kernel: BTRFS: device fsid 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (530)
Apr 24 23:34:43.833537 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 24 23:34:43.846634 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (522)
Apr 24 23:34:43.856264 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 24 23:34:43.864336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 23:34:43.869103 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 24 23:34:43.869937 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 24 23:34:43.881828 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 23:34:43.890782 disk-uuid[579]: Primary Header is updated.
Apr 24 23:34:43.890782 disk-uuid[579]: Secondary Entries is updated.
Apr 24 23:34:43.890782 disk-uuid[579]: Secondary Header is updated.
Apr 24 23:34:43.898613 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:43.904648 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:43.909623 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:44.005920 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Apr 24 23:34:44.145015 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Apr 24 23:34:44.145110 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Apr 24 23:34:44.145501 kernel: usbcore: registered new interface driver usbhid
Apr 24 23:34:44.145531 kernel: usbhid: USB HID core driver
Apr 24 23:34:44.251626 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Apr 24 23:34:44.379622 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Apr 24 23:34:44.433640 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Apr 24 23:34:44.912671 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 23:34:44.913936 disk-uuid[580]: The operation has completed successfully.
Apr 24 23:34:44.967960 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 23:34:44.968728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 23:34:44.979880 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 23:34:45.000979 sh[597]: Success
Apr 24 23:34:45.015744 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 24 23:34:45.067044 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 23:34:45.068081 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 23:34:45.071340 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 23:34:45.099722 kernel: BTRFS info (device dm-0): first mount of filesystem 5f4cf890-f9e2-4e04-aa84-1bcfb6e5643e
Apr 24 23:34:45.099778 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:34:45.100831 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 24 23:34:45.100864 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 24 23:34:45.101603 kernel: BTRFS info (device dm-0): using free space tree
Apr 24 23:34:45.107607 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 24 23:34:45.109922 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 23:34:45.111953 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 23:34:45.118905 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 23:34:45.124812 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 23:34:45.134025 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:34:45.134104 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:34:45.134129 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:45.140623 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:45.140695 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:45.152930 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:34:45.152626 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 24 23:34:45.161662 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 23:34:45.169873 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 23:34:45.265255 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:34:45.269366 ignition[690]: Ignition 2.19.0
Apr 24 23:34:45.269376 ignition[690]: Stage: fetch-offline
Apr 24 23:34:45.274035 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 23:34:45.269413 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:45.276610 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:34:45.269422 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:34:45.269574 ignition[690]: parsed url from cmdline: ""
Apr 24 23:34:45.269578 ignition[690]: no config URL provided
Apr 24 23:34:45.269596 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:34:45.269604 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:34:45.269610 ignition[690]: failed to fetch config: resource requires networking
Apr 24 23:34:45.269792 ignition[690]: Ignition finished successfully
Apr 24 23:34:45.298131 systemd-networkd[784]: lo: Link UP
Apr 24 23:34:45.298143 systemd-networkd[784]: lo: Gained carrier
Apr 24 23:34:45.299856 systemd-networkd[784]: Enumeration completed
Apr 24 23:34:45.300024 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 23:34:45.300352 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:45.300356 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:34:45.301117 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:45.301120 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 23:34:45.301700 systemd-networkd[784]: eth0: Link UP
Apr 24 23:34:45.301703 systemd-networkd[784]: eth0: Gained carrier
Apr 24 23:34:45.301711 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:45.303841 systemd[1]: Reached target network.target - Network.
Apr 24 23:34:45.309120 systemd-networkd[784]: eth1: Link UP
Apr 24 23:34:45.309124 systemd-networkd[784]: eth1: Gained carrier
Apr 24 23:34:45.309135 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 23:34:45.309852 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 23:34:45.338252 ignition[787]: Ignition 2.19.0
Apr 24 23:34:45.338264 ignition[787]: Stage: fetch
Apr 24 23:34:45.338455 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:45.338465 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:34:45.338559 ignition[787]: parsed url from cmdline: ""
Apr 24 23:34:45.338563 ignition[787]: no config URL provided
Apr 24 23:34:45.338567 ignition[787]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 23:34:45.338575 ignition[787]: no config at "/usr/lib/ignition/user.ign"
Apr 24 23:34:45.338614 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 24 23:34:45.339333 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 23:34:45.355704 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 24 23:34:45.359732 systemd-networkd[784]: eth0: DHCPv4 address 178.105.26.190/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 24 23:34:45.539513 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 24 23:34:45.546614 ignition[787]: GET result: OK
Apr 24 23:34:45.546744 ignition[787]: parsing config with SHA512: bead1242c919596288c2a86038e8c965b4b126819ec1858325fb3c2707dd0475eb0e43ee016f56dd41785484a973c76f40820eac64e720410b3d7cc8589522fa
Apr 24 23:34:45.551772 unknown[787]: fetched base config from "system"
Apr 24 23:34:45.551781 unknown[787]: fetched base config from "system"
Apr 24 23:34:45.552114 ignition[787]: fetch: fetch complete
Apr 24 23:34:45.551789 unknown[787]: fetched user config from "hetzner"
Apr 24 23:34:45.552119 ignition[787]: fetch: fetch passed
Apr 24 23:34:45.552156 ignition[787]: Ignition finished successfully
Apr 24 23:34:45.555615 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 23:34:45.561897 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 23:34:45.576818 ignition[794]: Ignition 2.19.0
Apr 24 23:34:45.576827 ignition[794]: Stage: kargs
Apr 24 23:34:45.576991 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:45.577000 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:34:45.578054 ignition[794]: kargs: kargs passed
Apr 24 23:34:45.581295 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 23:34:45.578104 ignition[794]: Ignition finished successfully
Apr 24 23:34:45.592851 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 23:34:45.607031 ignition[801]: Ignition 2.19.0
Apr 24 23:34:45.607044 ignition[801]: Stage: disks
Apr 24 23:34:45.607247 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:45.607258 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:34:45.610016 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 23:34:45.608313 ignition[801]: disks: disks passed
Apr 24 23:34:45.612786 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 23:34:45.608371 ignition[801]: Ignition finished successfully
Apr 24 23:34:45.615211 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 23:34:45.615913 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 23:34:45.616429 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 23:34:45.617824 systemd[1]: Reached target basic.target - Basic System.
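The fetch stage above shows Ignition racing DHCP: attempt #1 against the Hetzner metadata endpoint fails with "network is unreachable" because neither NIC has an address yet, and attempt #2 succeeds a fraction of a second after the DHCPv4 leases arrive. As an illustrative sketch only (this is not Ignition's actual Go implementation; the URL is the metadata address taken from the log, and `fetch_userdata` is a hypothetical helper), the retry loop behaves roughly like:

```python
import time
import urllib.error
import urllib.request


def fetch_userdata(url="http://169.254.169.254/hetzner/v1/userdata",
                   attempts=5, delay=0.2, timeout=2.0):
    """Fetch instance userdata, retrying on network errors.

    Mirrors the behaviour in the log: an early attempt may fail while the
    interface has no address yet; a later attempt succeeds once DHCP
    completes. Returns the response body, or None if every attempt fails.
    """
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            # e.g. "dial tcp 169.254.169.254:80: connect: network is unreachable"
            print(f"GET {url}: attempt #{attempt} failed: {exc}")
            time.sleep(delay)
    return None
```

Outside a Hetzner instance the link-local address is unreachable, so the helper simply exhausts its retries and returns None.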
Apr 24 23:34:45.624839 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 23:34:45.642543 systemd-fsck[809]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 24 23:34:45.647336 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 23:34:45.656724 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 23:34:45.704676 kernel: EXT4-fs (sda9): mounted filesystem edaa698b-3baa-4242-8691-64cb9f35f18f r/w with ordered data mode. Quota mode: none.
Apr 24 23:34:45.706393 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 23:34:45.708261 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:34:45.716854 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:34:45.723848 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 23:34:45.726968 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 24 23:34:45.730230 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 23:34:45.734569 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (817)
Apr 24 23:34:45.734664 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:34:45.730267 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:34:45.737131 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:34:45.737165 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:45.736688 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 23:34:45.742853 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 23:34:45.747404 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:45.747433 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:45.749896 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:34:45.791650 coreos-metadata[819]: Apr 24 23:34:45.791 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 24 23:34:45.793074 coreos-metadata[819]: Apr 24 23:34:45.793 INFO Fetch successful
Apr 24 23:34:45.795280 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 23:34:45.797155 coreos-metadata[819]: Apr 24 23:34:45.796 INFO wrote hostname ci-4081-3-6-n-3eeab28b3a to /sysroot/etc/hostname
Apr 24 23:34:45.798549 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:34:45.802706 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Apr 24 23:34:45.807454 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 23:34:45.812123 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 23:34:45.911757 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 23:34:45.918736 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 23:34:45.921814 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 23:34:45.931739 kernel: BTRFS info (device sda6): last unmount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:34:45.954150 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 23:34:45.957543 ignition[935]: INFO : Ignition 2.19.0
Apr 24 23:34:45.957543 ignition[935]: INFO : Stage: mount
Apr 24 23:34:45.957543 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:45.957543 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:34:45.960403 ignition[935]: INFO : mount: mount passed
Apr 24 23:34:45.960403 ignition[935]: INFO : Ignition finished successfully
Apr 24 23:34:45.960554 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 23:34:45.978046 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 23:34:46.100191 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 23:34:46.107939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 23:34:46.117621 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Apr 24 23:34:46.119845 kernel: BTRFS info (device sda6): first mount of filesystem 7d1fb622-285b-4375-96d6-a0d989283452
Apr 24 23:34:46.119890 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 24 23:34:46.119921 kernel: BTRFS info (device sda6): using free space tree
Apr 24 23:34:46.126078 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 23:34:46.126142 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 24 23:34:46.130474 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 23:34:46.154428 ignition[963]: INFO : Ignition 2.19.0
Apr 24 23:34:46.154428 ignition[963]: INFO : Stage: files
Apr 24 23:34:46.155675 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:46.155675 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:34:46.157809 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 23:34:46.159648 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 23:34:46.159648 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 23:34:46.163723 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 23:34:46.166289 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 23:34:46.166289 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 23:34:46.166289 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:34:46.166289 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 24 23:34:46.164135 unknown[963]: wrote ssh authorized keys file for user: core
Apr 24 23:34:46.296617 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 23:34:46.376645 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 24 23:34:46.376645 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:34:46.379152 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 24 23:34:46.520717 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 23:34:46.840770 systemd-networkd[784]: eth1: Gained IPv6LL
Apr 24 23:34:47.096726 systemd-networkd[784]: eth0: Gained IPv6LL
Apr 24 23:34:47.118324 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 24 23:34:47.118324 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 23:34:47.121079 ignition[963]: INFO : files: files passed
Apr 24 23:34:47.121079 ignition[963]: INFO : Ignition finished successfully
Apr 24 23:34:47.122785 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 23:34:47.132207 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 23:34:47.134745 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 23:34:47.138893 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 23:34:47.141670 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 23:34:47.165637 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:47.165637 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:47.168407 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 23:34:47.171372 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:34:47.172660 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 23:34:47.185995 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 23:34:47.222456 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 23:34:47.223614 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 23:34:47.226163 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
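Earlier in the log, Ignition records the digest of the fetched userdata before applying it ("parsing config with SHA512: bead1242..."). The same digest can be recomputed to confirm a config blob matches what Ignition reports; the helper and sample payload below are illustrative stand-ins, not the actual Hetzner userdata:

```python
import hashlib


def config_digest(config_bytes: bytes) -> str:
    """Return the hex SHA512 digest Ignition would log for this config."""
    return hashlib.sha512(config_bytes).hexdigest()


# Stand-in payload; the real userdata's digest in the log begins "bead1242...".
sample = b'{"ignition": {"version": "3.4.0"}}'
digest = config_digest(sample)
assert len(digest) == 128  # SHA512 is 64 bytes -> 128 hex characters
```

Comparing such a digest against the logged value is a quick way to check that the provisioning config on disk is the one the instance actually booted with.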
Apr 24 23:34:47.227163 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 23:34:47.228667 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 23:34:47.235848 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 23:34:47.248022 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:34:47.261915 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 23:34:47.275560 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 23:34:47.276416 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:34:47.278005 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 23:34:47.279567 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 23:34:47.279714 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 23:34:47.281284 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 23:34:47.282015 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 23:34:47.283131 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 23:34:47.284285 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 23:34:47.285285 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 23:34:47.286400 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 23:34:47.287535 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 23:34:47.288833 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 23:34:47.289918 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 23:34:47.291121 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 23:34:47.292118 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 23:34:47.292259 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 23:34:47.293678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 23:34:47.294361 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:34:47.295469 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 23:34:47.298659 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:34:47.299393 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 23:34:47.299519 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 23:34:47.302450 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 23:34:47.302570 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 23:34:47.304088 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 23:34:47.304180 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 23:34:47.305286 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 24 23:34:47.305388 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 24 23:34:47.314013 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 23:34:47.319282 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 23:34:47.319828 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 23:34:47.319962 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 23:34:47.324687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 23:34:47.324795 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 23:34:47.331117 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 23:34:47.331235 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 23:34:47.338738 ignition[1015]: INFO : Ignition 2.19.0
Apr 24 23:34:47.338738 ignition[1015]: INFO : Stage: umount
Apr 24 23:34:47.338738 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 23:34:47.338738 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 24 23:34:47.346108 ignition[1015]: INFO : umount: umount passed
Apr 24 23:34:47.346108 ignition[1015]: INFO : Ignition finished successfully
Apr 24 23:34:47.345551 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 23:34:47.347525 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 23:34:47.347670 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 23:34:47.351703 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 23:34:47.352320 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 23:34:47.353745 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 23:34:47.353796 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 23:34:47.355190 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 23:34:47.355276 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 23:34:47.357312 systemd[1]: Stopped target network.target - Network.
Apr 24 23:34:47.358477 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 23:34:47.358537 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 23:34:47.363032 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 23:34:47.365525 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 23:34:47.369663 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:34:47.371330 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 23:34:47.373403 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 23:34:47.375016 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 23:34:47.375066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 23:34:47.377734 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 23:34:47.377776 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 23:34:47.378321 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 23:34:47.378368 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 23:34:47.378993 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 23:34:47.379032 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 23:34:47.379784 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 23:34:47.380690 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 23:34:47.383639 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 23:34:47.383731 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 23:34:47.384777 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 23:34:47.384864 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 23:34:47.385858 systemd-networkd[784]: eth1: DHCPv6 lease lost
Apr 24 23:34:47.390287 systemd-networkd[784]: eth0: DHCPv6 lease lost
Apr 24 23:34:47.392903 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 23:34:47.393051 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 23:34:47.396657 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 23:34:47.396729 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:34:47.404792 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 23:34:47.405325 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 23:34:47.405390 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 23:34:47.408682 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 23:34:47.410455 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 23:34:47.411570 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 23:34:47.424667 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 23:34:47.424778 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 23:34:47.425523 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 23:34:47.425569 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 23:34:47.427101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 23:34:47.427150 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 23:34:47.428524 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 23:34:47.430791 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 23:34:47.432730 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 23:34:47.432830 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 23:34:47.434633 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 23:34:47.434703 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:34:47.436138 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 23:34:47.436174 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:34:47.437818 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 23:34:47.437944 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 23:34:47.439572 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 23:34:47.439634 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 23:34:47.441806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 23:34:47.441851 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 23:34:47.450078 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 23:34:47.450773 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 23:34:47.450834 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 23:34:47.451534 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 24 23:34:47.451578 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 23:34:47.452258 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 23:34:47.452302 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 23:34:47.453725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 23:34:47.453779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 23:34:47.462687 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 23:34:47.462910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 23:34:47.465052 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 23:34:47.471819 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 23:34:47.482040 systemd[1]: Switching root.
Apr 24 23:34:47.515629 systemd-journald[238]: Journal stopped
Apr 24 23:34:48.382268 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Apr 24 23:34:48.382353 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 23:34:48.382367 kernel: SELinux: policy capability open_perms=1
Apr 24 23:34:48.382382 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 23:34:48.382392 kernel: SELinux: policy capability always_check_network=0
Apr 24 23:34:48.382402 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 23:34:48.382412 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 23:34:48.382422 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 23:34:48.382432 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 23:34:48.382442 systemd[1]: Successfully loaded SELinux policy in 36.619ms.
Apr 24 23:34:48.382466 kernel: audit: type=1403 audit(1777073687.636:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 23:34:48.382476 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.049ms.
Apr 24 23:34:48.382490 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 24 23:34:48.382502 systemd[1]: Detected virtualization kvm.
Apr 24 23:34:48.382513 systemd[1]: Detected architecture arm64.
Apr 24 23:34:48.382523 systemd[1]: Detected first boot.
Apr 24 23:34:48.382535 systemd[1]: Hostname set to .
Apr 24 23:34:48.382545 systemd[1]: Initializing machine ID from VM UUID.
Apr 24 23:34:48.382556 zram_generator::config[1058]: No configuration found.
Apr 24 23:34:48.382572 systemd[1]: Populated /etc with preset unit settings.
Apr 24 23:34:48.382954 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 23:34:48.382985 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 23:34:48.382997 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 23:34:48.383008 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 23:34:48.383019 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 23:34:48.383030 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 23:34:48.383040 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 23:34:48.383050 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 23:34:48.383065 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 23:34:48.383076 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 23:34:48.383087 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 23:34:48.383097 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 23:34:48.383135 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 23:34:48.383148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 23:34:48.383159 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 23:34:48.383170 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 23:34:48.383181 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 23:34:48.383194 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 24 23:34:48.383212 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 23:34:48.383226 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 23:34:48.383237 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 23:34:48.383247 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 23:34:48.383258 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 23:34:48.383270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 23:34:48.383286 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 23:34:48.383296 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 23:34:48.383306 systemd[1]: Reached target swap.target - Swaps.
Apr 24 23:34:48.383317 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 23:34:48.383327 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 23:34:48.383338 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 23:34:48.383348 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 23:34:48.385235 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 23:34:48.385262 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 23:34:48.385279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 23:34:48.385290 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 23:34:48.385301 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 23:34:48.385312 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 23:34:48.385323 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 23:34:48.385334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 23:34:48.385346 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 24 23:34:48.385357 systemd[1]: Reached target machines.target - Containers. Apr 24 23:34:48.385369 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 24 23:34:48.385380 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:34:48.385395 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 24 23:34:48.385407 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 24 23:34:48.385418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:34:48.385430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:34:48.385440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:34:48.385451 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 24 23:34:48.385461 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:34:48.385472 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 24 23:34:48.385483 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 24 23:34:48.385493 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 24 23:34:48.385503 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 24 23:34:48.385514 systemd[1]: Stopped systemd-fsck-usr.service. Apr 24 23:34:48.385526 kernel: fuse: init (API version 7.39) Apr 24 23:34:48.385537 systemd[1]: Starting systemd-journald.service - Journal Service... 
Apr 24 23:34:48.385548 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 24 23:34:48.385559 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 24 23:34:48.385569 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 24 23:34:48.385597 kernel: loop: module loaded Apr 24 23:34:48.385611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 24 23:34:48.385622 systemd[1]: verity-setup.service: Deactivated successfully. Apr 24 23:34:48.385633 systemd[1]: Stopped verity-setup.service. Apr 24 23:34:48.385645 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 24 23:34:48.385657 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 24 23:34:48.385667 systemd[1]: Mounted media.mount - External Media Directory. Apr 24 23:34:48.385679 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 24 23:34:48.385690 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 24 23:34:48.385702 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 24 23:34:48.385713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 24 23:34:48.385724 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 24 23:34:48.385735 kernel: ACPI: bus type drm_connector registered Apr 24 23:34:48.385745 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 24 23:34:48.385755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:34:48.385766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:34:48.385810 systemd-journald[1121]: Collecting audit messages is disabled. Apr 24 23:34:48.385836 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:34:48.385847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Apr 24 23:34:48.385865 systemd-journald[1121]: Journal started Apr 24 23:34:48.385888 systemd-journald[1121]: Runtime Journal (/run/log/journal/b6695b297eef494292e6772580446ba9) is 8.0M, max 76.6M, 68.6M free. Apr 24 23:34:48.121624 systemd[1]: Queued start job for default target multi-user.target. Apr 24 23:34:48.145639 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Apr 24 23:34:48.146114 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 24 23:34:48.387821 systemd[1]: Started systemd-journald.service - Journal Service. Apr 24 23:34:48.388693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:34:48.388850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:34:48.390322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 24 23:34:48.390675 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 24 23:34:48.392965 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:34:48.393114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:34:48.394399 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 24 23:34:48.395440 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 24 23:34:48.396475 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 24 23:34:48.416556 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 24 23:34:48.425750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 24 23:34:48.430730 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 24 23:34:48.433696 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Apr 24 23:34:48.433752 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 24 23:34:48.435444 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 24 23:34:48.444808 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 23:34:48.451803 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 24 23:34:48.452896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:34:48.455780 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 24 23:34:48.465785 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 24 23:34:48.466494 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:34:48.467791 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 24 23:34:48.471698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:34:48.472950 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 24 23:34:48.476722 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 24 23:34:48.480119 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 24 23:34:48.482289 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 24 23:34:48.483579 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 24 23:34:48.485237 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 23:34:48.497990 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Apr 24 23:34:48.512649 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 24 23:34:48.513914 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 24 23:34:48.521534 systemd-journald[1121]: Time spent on flushing to /var/log/journal/b6695b297eef494292e6772580446ba9 is 92.063ms for 1128 entries. Apr 24 23:34:48.521534 systemd-journald[1121]: System Journal (/var/log/journal/b6695b297eef494292e6772580446ba9) is 8.0M, max 584.8M, 576.8M free. Apr 24 23:34:48.627778 systemd-journald[1121]: Received client request to flush runtime journal. Apr 24 23:34:48.627830 kernel: loop0: detected capacity change from 0 to 209336 Apr 24 23:34:48.627844 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 24 23:34:48.627856 kernel: loop1: detected capacity change from 0 to 8 Apr 24 23:34:48.523536 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 24 23:34:48.525644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 24 23:34:48.531330 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 24 23:34:48.582734 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 24 23:34:48.588361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 23:34:48.590273 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 24 23:34:48.605835 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Apr 24 23:34:48.605846 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Apr 24 23:34:48.613768 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 24 23:34:48.617402 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 24 23:34:48.628781 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 24 23:34:48.637248 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 24 23:34:48.659606 kernel: loop2: detected capacity change from 0 to 114328 Apr 24 23:34:48.680699 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 24 23:34:48.702690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 24 23:34:48.707576 kernel: loop3: detected capacity change from 0 to 114432 Apr 24 23:34:48.729571 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Apr 24 23:34:48.729605 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Apr 24 23:34:48.735070 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 24 23:34:48.756625 kernel: loop4: detected capacity change from 0 to 209336 Apr 24 23:34:48.772610 kernel: loop5: detected capacity change from 0 to 8 Apr 24 23:34:48.779655 kernel: loop6: detected capacity change from 0 to 114328 Apr 24 23:34:48.801840 kernel: loop7: detected capacity change from 0 to 114432 Apr 24 23:34:48.822991 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Apr 24 23:34:48.823889 (sd-merge)[1201]: Merged extensions into '/usr'. Apr 24 23:34:48.830700 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 23:34:48.830719 systemd[1]: Reloading... Apr 24 23:34:48.948651 zram_generator::config[1226]: No configuration found. Apr 24 23:34:49.000815 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 23:34:49.082284 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 24 23:34:49.130004 systemd[1]: Reloading finished in 298 ms. Apr 24 23:34:49.155036 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 23:34:49.156193 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 23:34:49.167879 systemd[1]: Starting ensure-sysext.service... Apr 24 23:34:49.171462 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 23:34:49.178754 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Apr 24 23:34:49.178863 systemd[1]: Reloading... Apr 24 23:34:49.219041 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 23:34:49.219328 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 23:34:49.219999 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 23:34:49.220278 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 24 23:34:49.220328 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Apr 24 23:34:49.227807 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:34:49.227822 systemd-tmpfiles[1265]: Skipping /boot Apr 24 23:34:49.247578 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 23:34:49.247606 systemd-tmpfiles[1265]: Skipping /boot Apr 24 23:34:49.269622 zram_generator::config[1291]: No configuration found. Apr 24 23:34:49.384820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:34:49.431755 systemd[1]: Reloading finished in 252 ms. 
Apr 24 23:34:49.452577 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 23:34:49.453796 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 23:34:49.473880 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 24 23:34:49.478189 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 23:34:49.490870 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 23:34:49.502828 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 23:34:49.508520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 23:34:49.510958 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 23:34:49.521985 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 23:34:49.524523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:34:49.536977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:34:49.542916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:34:49.550037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:34:49.550912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:34:49.551788 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 23:34:49.561875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:34:49.562064 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Apr 24 23:34:49.563885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:34:49.564110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:34:49.573943 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 23:34:49.580724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:34:49.585876 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:34:49.600032 augenrules[1359]: No rules Apr 24 23:34:49.613374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 23:34:49.614179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:34:49.614822 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 23:34:49.617664 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 24 23:34:49.620099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 23:34:49.621313 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 23:34:49.624535 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 23:34:49.633731 systemd-udevd[1344]: Using default interface naming scheme 'v255'. Apr 24 23:34:49.640118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:34:49.640341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:34:49.643230 systemd[1]: Finished ensure-sysext.service. Apr 24 23:34:49.644970 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:34:49.645636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 24 23:34:49.648034 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:34:49.648180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:34:49.649768 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 23:34:49.650725 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 23:34:49.658268 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:34:49.658391 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:34:49.667483 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 23:34:49.668298 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:34:49.675564 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 24 23:34:49.694932 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 23:34:49.738153 systemd-resolved[1342]: Positive Trust Anchors: Apr 24 23:34:49.738174 systemd-resolved[1342]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 23:34:49.738218 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 23:34:49.743656 systemd-resolved[1342]: Using system hostname 'ci-4081-3-6-n-3eeab28b3a'. Apr 24 23:34:49.745812 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 23:34:49.746548 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 23:34:49.793653 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 23:34:49.794559 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 23:34:49.808398 systemd-networkd[1387]: lo: Link UP Apr 24 23:34:49.808849 systemd-networkd[1387]: lo: Gained carrier Apr 24 23:34:49.809413 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 24 23:34:49.812727 systemd-networkd[1387]: Enumeration completed Apr 24 23:34:49.812832 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 23:34:49.814191 systemd[1]: Reached target network.target - Network. Apr 24 23:34:49.814879 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:34:49.814883 systemd-networkd[1387]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 24 23:34:49.816744 systemd-networkd[1387]: eth1: Link UP Apr 24 23:34:49.816748 systemd-networkd[1387]: eth1: Gained carrier Apr 24 23:34:49.816765 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:34:49.820780 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 23:34:49.852775 systemd-networkd[1387]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 24 23:34:49.854236 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 24 23:34:49.871406 systemd-networkd[1387]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:34:49.879621 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 23:34:49.907642 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1400) Apr 24 23:34:49.919009 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:34:49.919125 systemd-networkd[1387]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 23:34:49.920102 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 24 23:34:49.920675 systemd-networkd[1387]: eth0: Link UP Apr 24 23:34:49.920750 systemd-networkd[1387]: eth0: Gained carrier Apr 24 23:34:49.920832 systemd-networkd[1387]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 23:34:49.925895 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. 
Apr 24 23:34:49.946903 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Apr 24 23:34:49.949392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 23:34:49.955774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 23:34:49.959965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 23:34:49.964429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 23:34:49.965917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 23:34:49.965960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 23:34:49.966344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 23:34:49.966510 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 23:34:49.974064 systemd-networkd[1387]: eth0: DHCPv4 address 178.105.26.190/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 24 23:34:49.974461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 23:34:49.975080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 23:34:49.975677 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 24 23:34:49.976889 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 23:34:49.977303 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. 
Apr 24 23:34:49.990882 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 23:34:49.991898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 24 23:34:49.993977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 23:34:50.026634 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Apr 24 23:34:50.029922 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Apr 24 23:34:50.029992 kernel: [drm] features: -context_init Apr 24 23:34:50.038634 kernel: [drm] number of scanouts: 1 Apr 24 23:34:50.038731 kernel: [drm] number of cap sets: 0 Apr 24 23:34:50.040694 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Apr 24 23:34:50.044922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:34:50.047610 kernel: Console: switching to colour frame buffer device 160x50 Apr 24 23:34:50.060628 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Apr 24 23:34:50.062181 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 24 23:34:50.069951 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 23:34:50.074498 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 24 23:34:50.075655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:34:50.084774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 23:34:50.087192 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 23:34:50.155758 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 23:34:50.198651 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Apr 24 23:34:50.205506 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 24 23:34:50.222646 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:34:50.248710 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 24 23:34:50.249962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 24 23:34:50.251806 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 23:34:50.252831 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 23:34:50.253917 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 23:34:50.255246 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 23:34:50.256110 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 23:34:50.256966 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 23:34:50.257782 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 23:34:50.257819 systemd[1]: Reached target paths.target - Path Units. Apr 24 23:34:50.258474 systemd[1]: Reached target timers.target - Timer Units. Apr 24 23:34:50.262631 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 23:34:50.264895 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 23:34:50.272388 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 24 23:34:50.275233 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 24 23:34:50.278512 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 23:34:50.280660 systemd[1]: Reached target sockets.target - Socket Units. 
Apr 24 23:34:50.281362 systemd[1]: Reached target basic.target - Basic System. Apr 24 23:34:50.282106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:34:50.282219 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 23:34:50.290822 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 23:34:50.295149 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 24 23:34:50.301771 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 23:34:50.307363 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 23:34:50.312740 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 23:34:50.320780 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 23:34:50.321386 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 23:34:50.325782 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 23:34:50.328644 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 24 23:34:50.337121 coreos-metadata[1451]: Apr 24 23:34:50.330 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Apr 24 23:34:50.337121 coreos-metadata[1451]: Apr 24 23:34:50.333 INFO Fetch successful Apr 24 23:34:50.337121 coreos-metadata[1451]: Apr 24 23:34:50.333 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Apr 24 23:34:50.337121 coreos-metadata[1451]: Apr 24 23:34:50.336 INFO Fetch successful Apr 24 23:34:50.333187 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Apr 24 23:34:50.338716 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 24 23:34:50.342435 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 23:34:50.355454 jq[1453]: false Apr 24 23:34:50.357940 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 23:34:50.359347 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 23:34:50.359836 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 23:34:50.362401 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 23:34:50.367882 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 23:34:50.371608 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 24 23:34:50.375946 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 23:34:50.376456 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 23:34:50.405075 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 23:34:50.406628 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 24 23:34:50.431523 jq[1465]: true Apr 24 23:34:50.439509 extend-filesystems[1456]: Found loop4 Apr 24 23:34:50.439509 extend-filesystems[1456]: Found loop5 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found loop6 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found loop7 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda1 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda2 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda3 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found usr Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda4 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda6 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda7 Apr 24 23:34:50.440566 extend-filesystems[1456]: Found sda9 Apr 24 23:34:50.440566 extend-filesystems[1456]: Checking size of /dev/sda9 Apr 24 23:34:50.484780 tar[1468]: linux-arm64/LICENSE Apr 24 23:34:50.484780 tar[1468]: linux-arm64/helm Apr 24 23:34:50.485012 extend-filesystems[1456]: Resized partition /dev/sda9 Apr 24 23:34:50.441054 dbus-daemon[1452]: [system] SELinux support is enabled Apr 24 23:34:50.441720 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 23:34:50.497636 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Apr 24 23:34:50.448722 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 23:34:50.501336 jq[1487]: true Apr 24 23:34:50.448764 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 23:34:50.458745 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Apr 24 23:34:50.510570 update_engine[1463]: I20260424 23:34:50.506084 1463 main.cc:92] Flatcar Update Engine starting Apr 24 23:34:50.510851 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Apr 24 23:34:50.458768 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 23:34:50.468166 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 23:34:50.481008 systemd-logind[1462]: New seat seat0. Apr 24 23:34:50.489799 systemd-logind[1462]: Watching system buttons on /dev/input/event0 (Power Button) Apr 24 23:34:50.489815 systemd-logind[1462]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Apr 24 23:34:50.490012 systemd[1]: Started systemd-logind.service - User Login Management. Apr 24 23:34:50.505304 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 23:34:50.505490 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 24 23:34:50.522354 update_engine[1463]: I20260424 23:34:50.522145 1463 update_check_scheduler.cc:74] Next update check in 11m40s Apr 24 23:34:50.522495 systemd[1]: Started update-engine.service - Update Engine. Apr 24 23:34:50.526983 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 23:34:50.546670 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 23:34:50.549275 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 23:34:50.616210 bash[1519]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:34:50.615578 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 23:34:50.627606 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1395) Apr 24 23:34:50.631087 systemd[1]: Starting sshkeys.service... 
Apr 24 23:34:50.659492 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 24 23:34:50.680834 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 23:34:50.696955 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Apr 24 23:34:50.714181 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 24 23:34:50.714181 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Apr 24 23:34:50.714181 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Apr 24 23:34:50.722668 extend-filesystems[1456]: Resized filesystem in /dev/sda9 Apr 24 23:34:50.722668 extend-filesystems[1456]: Found sr0 Apr 24 23:34:50.718119 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 23:34:50.729992 coreos-metadata[1528]: Apr 24 23:34:50.727 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Apr 24 23:34:50.718335 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 23:34:50.731400 coreos-metadata[1528]: Apr 24 23:34:50.731 INFO Fetch successful Apr 24 23:34:50.738802 unknown[1528]: wrote ssh authorized keys file for user: core Apr 24 23:34:50.749521 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 23:34:50.779986 update-ssh-keys[1538]: Updated "/home/core/.ssh/authorized_keys" Apr 24 23:34:50.781908 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 24 23:34:50.786939 systemd[1]: Finished sshkeys.service. 
Apr 24 23:34:50.808614 containerd[1485]: time="2026-04-24T23:34:50.808516120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 24 23:34:50.838477 containerd[1485]: time="2026-04-24T23:34:50.838249360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:50.840135 containerd[1485]: time="2026-04-24T23:34:50.840097560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840228480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840251120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840415040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840432320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840489760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840502120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840677000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840693920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840707280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840719080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840794320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841511 containerd[1485]: time="2026-04-24T23:34:50.840995880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841809 containerd[1485]: time="2026-04-24T23:34:50.841096680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 24 23:34:50.841809 containerd[1485]: time="2026-04-24T23:34:50.841110840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 24 23:34:50.841809 containerd[1485]: time="2026-04-24T23:34:50.841182200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 24 23:34:50.841809 containerd[1485]: time="2026-04-24T23:34:50.841279840Z" level=info msg="metadata content store policy set" policy=shared Apr 24 23:34:50.846031 containerd[1485]: time="2026-04-24T23:34:50.845996280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 24 23:34:50.846183 containerd[1485]: time="2026-04-24T23:34:50.846160560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 24 23:34:50.846292 containerd[1485]: time="2026-04-24T23:34:50.846271800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 24 23:34:50.846406 containerd[1485]: time="2026-04-24T23:34:50.846377120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 24 23:34:50.846527 containerd[1485]: time="2026-04-24T23:34:50.846503920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 24 23:34:50.846823 containerd[1485]: time="2026-04-24T23:34:50.846794360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 24 23:34:50.847353 containerd[1485]: time="2026-04-24T23:34:50.847319280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 24 23:34:50.847512 containerd[1485]: time="2026-04-24T23:34:50.847468400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 24 23:34:50.847512 containerd[1485]: time="2026-04-24T23:34:50.847495880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 24 23:34:50.847512 containerd[1485]: time="2026-04-24T23:34:50.847511000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847527240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847541160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847554280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847568640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847604040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847620160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847633520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847657 containerd[1485]: time="2026-04-24T23:34:50.847645560Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847666120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847681200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847693800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847707480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847719880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847733320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847746080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847759640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847772680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847791520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847804000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847815520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847829800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847852280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 24 23:34:50.847880 containerd[1485]: time="2026-04-24T23:34:50.847873760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.848330 containerd[1485]: time="2026-04-24T23:34:50.847885640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.848330 containerd[1485]: time="2026-04-24T23:34:50.847896800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 24 23:34:50.848610 containerd[1485]: time="2026-04-24T23:34:50.848558040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 24 23:34:50.853670 containerd[1485]: time="2026-04-24T23:34:50.853625040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 24 23:34:50.853670 containerd[1485]: time="2026-04-24T23:34:50.853661520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 24 23:34:50.853793 containerd[1485]: time="2026-04-24T23:34:50.853696680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 24 23:34:50.853793 containerd[1485]: time="2026-04-24T23:34:50.853708760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 24 23:34:50.853793 containerd[1485]: time="2026-04-24T23:34:50.853729160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 24 23:34:50.853793 containerd[1485]: time="2026-04-24T23:34:50.853740840Z" level=info msg="NRI interface is disabled by configuration." Apr 24 23:34:50.853793 containerd[1485]: time="2026-04-24T23:34:50.853752840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 24 23:34:50.854207 containerd[1485]: time="2026-04-24T23:34:50.854126240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 24 23:34:50.854207 containerd[1485]: time="2026-04-24T23:34:50.854205560Z" level=info msg="Connect containerd service" Apr 24 23:34:50.854408 containerd[1485]: time="2026-04-24T23:34:50.854255360Z" level=info msg="using legacy CRI server" Apr 24 23:34:50.854408 containerd[1485]: time="2026-04-24T23:34:50.854264440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 23:34:50.854408 containerd[1485]: time="2026-04-24T23:34:50.854369000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 24 23:34:50.856829 containerd[1485]: time="2026-04-24T23:34:50.856781600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 24 23:34:50.857210 containerd[1485]: time="2026-04-24T23:34:50.857108040Z" level=info msg="Start subscribing containerd event" Apr 24 23:34:50.857210 containerd[1485]: time="2026-04-24T23:34:50.857173240Z" level=info msg="Start recovering state" Apr 24 23:34:50.857605 containerd[1485]: time="2026-04-24T23:34:50.857366080Z" level=info msg="Start event monitor" Apr 24 23:34:50.857605 containerd[1485]: time="2026-04-24T23:34:50.857386400Z" level=info msg="Start snapshots syncer" Apr 24 23:34:50.857605 containerd[1485]: time="2026-04-24T23:34:50.857397480Z" level=info msg="Start cni network conf syncer for default" Apr 24 23:34:50.857605 containerd[1485]: time="2026-04-24T23:34:50.857407200Z" level=info msg="Start streaming server" Apr 24 23:34:50.857605 containerd[1485]: time="2026-04-24T23:34:50.857367200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 23:34:50.857814 containerd[1485]: time="2026-04-24T23:34:50.857794840Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 23:34:50.857947 containerd[1485]: time="2026-04-24T23:34:50.857933040Z" level=info msg="containerd successfully booted in 0.050771s" Apr 24 23:34:50.858034 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 23:34:50.872824 systemd-networkd[1387]: eth1: Gained IPv6LL Apr 24 23:34:50.873536 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 24 23:34:50.878979 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 23:34:50.880327 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 23:34:50.891115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:34:50.898427 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 23:34:50.954336 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 24 23:34:51.276595 tar[1468]: linux-arm64/README.md Apr 24 23:34:51.288949 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 23:34:51.750767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:34:51.753898 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:34:51.768773 systemd-networkd[1387]: eth0: Gained IPv6LL Apr 24 23:34:51.770085 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection. Apr 24 23:34:51.866818 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 23:34:51.889318 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 23:34:51.902613 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 23:34:51.907216 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 23:34:51.908367 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 23:34:51.914885 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 23:34:51.932573 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 23:34:51.943132 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 23:34:51.947765 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 24 23:34:51.949442 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 23:34:51.951435 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 23:34:51.952319 systemd[1]: Startup finished in 767ms (kernel) + 4.951s (initrd) + 4.352s (userspace) = 10.070s. 
Apr 24 23:34:52.289900 kubelet[1564]: E0424 23:34:52.289799 1564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:34:52.293181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:34:52.293439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:35:02.543834 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 24 23:35:02.557711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:35:02.689036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:35:02.702680 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:35:02.751092 kubelet[1599]: E0424 23:35:02.751026 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:35:02.755005 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:35:02.755266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:35:12.829785 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 23:35:12.838893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:35:12.959903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:35:12.965176 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:35:13.003425 kubelet[1615]: E0424 23:35:13.003378 1615 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:35:13.006453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:35:13.006621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:35:22.300781 systemd-resolved[1342]: Clock change detected. Flushing caches. Apr 24 23:35:22.301036 systemd-timesyncd[1379]: Contacted time server 5.45.97.204:123 (2.flatcar.pool.ntp.org). Apr 24 23:35:22.301138 systemd-timesyncd[1379]: Initial clock synchronization to Fri 2026-04-24 23:35:22.300700 UTC. Apr 24 23:35:23.507307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 24 23:35:23.514597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:35:23.648091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 23:35:23.661012 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:35:23.709710 kubelet[1630]: E0424 23:35:23.709647 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:35:23.713278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:35:23.713612 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:35:33.757364 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 24 23:35:33.765167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:35:33.804562 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 23:35:33.813647 systemd[1]: Started sshd@0-178.105.26.190:22-50.85.169.122:50788.service - OpenSSH per-connection server daemon (50.85.169.122:50788). Apr 24 23:35:33.911555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:35:33.915717 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 23:35:33.952028 sshd[1641]: Accepted publickey for core from 50.85.169.122 port 50788 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:33.952591 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:33.966090 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Apr 24 23:35:33.972371 kubelet[1648]: E0424 23:35:33.972290 1648 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 23:35:33.972593 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 23:35:33.975925 systemd-logind[1462]: New session 1 of user core. Apr 24 23:35:33.976989 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 23:35:33.977109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 23:35:33.989749 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 23:35:33.996887 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 23:35:34.001877 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 23:35:34.110799 systemd[1657]: Queued start job for default target default.target. Apr 24 23:35:34.122123 systemd[1657]: Created slice app.slice - User Application Slice. Apr 24 23:35:34.122450 systemd[1657]: Reached target paths.target - Paths. Apr 24 23:35:34.122487 systemd[1657]: Reached target timers.target - Timers. Apr 24 23:35:34.125296 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 23:35:34.154418 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 23:35:34.154538 systemd[1657]: Reached target sockets.target - Sockets. Apr 24 23:35:34.154551 systemd[1657]: Reached target basic.target - Basic System. Apr 24 23:35:34.154589 systemd[1657]: Reached target default.target - Main User Target. Apr 24 23:35:34.154614 systemd[1657]: Startup finished in 146ms. 
Apr 24 23:35:34.155070 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 23:35:34.166021 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 23:35:34.292097 systemd[1]: Started sshd@1-178.105.26.190:22-50.85.169.122:50800.service - OpenSSH per-connection server daemon (50.85.169.122:50800). Apr 24 23:35:34.404356 sshd[1668]: Accepted publickey for core from 50.85.169.122 port 50800 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:34.406670 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:34.412869 systemd-logind[1462]: New session 2 of user core. Apr 24 23:35:34.418689 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 24 23:35:34.519397 sshd[1668]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:34.524517 systemd[1]: sshd@1-178.105.26.190:22-50.85.169.122:50800.service: Deactivated successfully. Apr 24 23:35:34.526699 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 23:35:34.529283 systemd-logind[1462]: Session 2 logged out. Waiting for processes to exit. Apr 24 23:35:34.531024 systemd-logind[1462]: Removed session 2. Apr 24 23:35:34.553810 systemd[1]: Started sshd@2-178.105.26.190:22-50.85.169.122:50816.service - OpenSSH per-connection server daemon (50.85.169.122:50816). Apr 24 23:35:34.677591 sshd[1675]: Accepted publickey for core from 50.85.169.122 port 50816 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:34.680472 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:34.686912 systemd-logind[1462]: New session 3 of user core. Apr 24 23:35:34.692820 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 23:35:34.788930 sshd[1675]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:34.793609 systemd[1]: sshd@2-178.105.26.190:22-50.85.169.122:50816.service: Deactivated successfully. 
Apr 24 23:35:34.795896 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 23:35:34.798995 systemd-logind[1462]: Session 3 logged out. Waiting for processes to exit. Apr 24 23:35:34.800126 systemd-logind[1462]: Removed session 3. Apr 24 23:35:34.816855 systemd[1]: Started sshd@3-178.105.26.190:22-50.85.169.122:50822.service - OpenSSH per-connection server daemon (50.85.169.122:50822). Apr 24 23:35:34.941966 sshd[1682]: Accepted publickey for core from 50.85.169.122 port 50822 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:34.943254 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:34.947905 systemd-logind[1462]: New session 4 of user core. Apr 24 23:35:34.960649 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 23:35:35.063075 sshd[1682]: pam_unix(sshd:session): session closed for user core Apr 24 23:35:35.070592 systemd-logind[1462]: Session 4 logged out. Waiting for processes to exit. Apr 24 23:35:35.071637 systemd[1]: sshd@3-178.105.26.190:22-50.85.169.122:50822.service: Deactivated successfully. Apr 24 23:35:35.073563 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 23:35:35.074862 systemd-logind[1462]: Removed session 4. Apr 24 23:35:35.101859 systemd[1]: Started sshd@4-178.105.26.190:22-50.85.169.122:50826.service - OpenSSH per-connection server daemon (50.85.169.122:50826). Apr 24 23:35:35.226236 sshd[1689]: Accepted publickey for core from 50.85.169.122 port 50826 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:35:35.228386 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:35:35.234935 systemd-logind[1462]: New session 5 of user core. Apr 24 23:35:35.241633 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 23:35:35.335767 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 23:35:35.336062 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:35:35.355633 sudo[1692]: pam_unix(sudo:session): session closed for user root
Apr 24 23:35:35.372983 sshd[1689]: pam_unix(sshd:session): session closed for user core
Apr 24 23:35:35.380580 systemd[1]: sshd@4-178.105.26.190:22-50.85.169.122:50826.service: Deactivated successfully.
Apr 24 23:35:35.383892 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 23:35:35.384930 systemd-logind[1462]: Session 5 logged out. Waiting for processes to exit.
Apr 24 23:35:35.386324 systemd-logind[1462]: Removed session 5.
Apr 24 23:35:35.409719 systemd[1]: Started sshd@5-178.105.26.190:22-50.85.169.122:50836.service - OpenSSH per-connection server daemon (50.85.169.122:50836).
Apr 24 23:35:35.547360 sshd[1697]: Accepted publickey for core from 50.85.169.122 port 50836 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M
Apr 24 23:35:35.549592 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:35:35.554477 systemd-logind[1462]: New session 6 of user core.
Apr 24 23:35:35.566677 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 23:35:35.652066 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 23:35:35.652406 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:35:35.657665 sudo[1701]: pam_unix(sudo:session): session closed for user root
Apr 24 23:35:35.663968 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 24 23:35:35.664253 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:35:35.678945 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 24 23:35:35.694961 auditctl[1704]: No rules
Apr 24 23:35:35.696879 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 23:35:35.697277 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 24 23:35:35.709382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 24 23:35:35.736766 augenrules[1722]: No rules
Apr 24 23:35:35.738625 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 24 23:35:35.740221 sudo[1700]: pam_unix(sudo:session): session closed for user root
Apr 24 23:35:35.757451 sshd[1697]: pam_unix(sshd:session): session closed for user core
Apr 24 23:35:35.763758 systemd-logind[1462]: Session 6 logged out. Waiting for processes to exit.
Apr 24 23:35:35.763854 systemd[1]: sshd@5-178.105.26.190:22-50.85.169.122:50836.service: Deactivated successfully.
Apr 24 23:35:35.766063 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 23:35:35.769475 systemd-logind[1462]: Removed session 6.
Apr 24 23:35:35.788772 systemd[1]: Started sshd@6-178.105.26.190:22-50.85.169.122:50850.service - OpenSSH per-connection server daemon (50.85.169.122:50850).
Apr 24 23:35:35.913664 sshd[1730]: Accepted publickey for core from 50.85.169.122 port 50850 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M
Apr 24 23:35:35.915230 sshd[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 23:35:35.921111 systemd-logind[1462]: New session 7 of user core.
Apr 24 23:35:35.927649 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 23:35:36.013055 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 23:35:36.013372 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 23:35:36.310431 update_engine[1463]: I20260424 23:35:36.310382  1463 update_attempter.cc:509] Updating boot flags...
Apr 24 23:35:36.314990 (dockerd)[1748]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 23:35:36.315234 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 23:35:36.357381 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1758)
Apr 24 23:35:36.444818 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1761)
Apr 24 23:35:36.614771 dockerd[1748]: time="2026-04-24T23:35:36.614601535Z" level=info msg="Starting up"
Apr 24 23:35:36.710417 dockerd[1748]: time="2026-04-24T23:35:36.710351655Z" level=info msg="Loading containers: start."
Apr 24 23:35:36.804075 kernel: Initializing XFRM netlink socket
Apr 24 23:35:36.881798 systemd-networkd[1387]: docker0: Link UP
Apr 24 23:35:36.899097 dockerd[1748]: time="2026-04-24T23:35:36.899022775Z" level=info msg="Loading containers: done."
Apr 24 23:35:36.913018 dockerd[1748]: time="2026-04-24T23:35:36.912962335Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 24 23:35:36.913179 dockerd[1748]: time="2026-04-24T23:35:36.913083095Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 24 23:35:36.913255 dockerd[1748]: time="2026-04-24T23:35:36.913219655Z" level=info msg="Daemon has completed initialization"
Apr 24 23:35:36.948444 dockerd[1748]: time="2026-04-24T23:35:36.948172455Z" level=info msg="API listen on /run/docker.sock"
Apr 24 23:35:36.948470 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 24 23:35:37.446337 containerd[1485]: time="2026-04-24T23:35:37.446270255Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 24 23:35:38.003699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665340906.mount: Deactivated successfully.
Apr 24 23:35:39.018361 containerd[1485]: time="2026-04-24T23:35:39.017020055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:39.018765 containerd[1485]: time="2026-04-24T23:35:39.018539335Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008885"
Apr 24 23:35:39.019580 containerd[1485]: time="2026-04-24T23:35:39.019546815Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:39.023073 containerd[1485]: time="2026-04-24T23:35:39.023040295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:39.024352 containerd[1485]: time="2026-04-24T23:35:39.024301135Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 1.57798272s"
Apr 24 23:35:39.024448 containerd[1485]: time="2026-04-24T23:35:39.024431255Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\""
Apr 24 23:35:39.025184 containerd[1485]: time="2026-04-24T23:35:39.025145495Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 24 23:35:40.221183 containerd[1485]: time="2026-04-24T23:35:40.221128815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:40.222624 containerd[1485]: time="2026-04-24T23:35:40.222583295Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297794"
Apr 24 23:35:40.223924 containerd[1485]: time="2026-04-24T23:35:40.223413295Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:40.227013 containerd[1485]: time="2026-04-24T23:35:40.226975575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:40.228430 containerd[1485]: time="2026-04-24T23:35:40.228394255Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.2031042s"
Apr 24 23:35:40.228534 containerd[1485]: time="2026-04-24T23:35:40.228516415Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\""
Apr 24 23:35:40.229218 containerd[1485]: time="2026-04-24T23:35:40.229184935Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 24 23:35:41.277925 containerd[1485]: time="2026-04-24T23:35:41.277872015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:41.280092 containerd[1485]: time="2026-04-24T23:35:41.280017375Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141378"
Apr 24 23:35:41.280847 containerd[1485]: time="2026-04-24T23:35:41.280800335Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:41.284818 containerd[1485]: time="2026-04-24T23:35:41.284763415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:41.286198 containerd[1485]: time="2026-04-24T23:35:41.286062055Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.05669284s"
Apr 24 23:35:41.286198 containerd[1485]: time="2026-04-24T23:35:41.286101055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\""
Apr 24 23:35:41.287094 containerd[1485]: time="2026-04-24T23:35:41.286805735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 24 23:35:42.143088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980231779.mount: Deactivated successfully.
Apr 24 23:35:42.493485 containerd[1485]: time="2026-04-24T23:35:42.493284215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:42.494979 containerd[1485]: time="2026-04-24T23:35:42.494913815Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040534"
Apr 24 23:35:42.495961 containerd[1485]: time="2026-04-24T23:35:42.495890615Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:42.498322 containerd[1485]: time="2026-04-24T23:35:42.498264455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:42.499758 containerd[1485]: time="2026-04-24T23:35:42.499681895Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.2128412s"
Apr 24 23:35:42.499758 containerd[1485]: time="2026-04-24T23:35:42.499742015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\""
Apr 24 23:35:42.500399 containerd[1485]: time="2026-04-24T23:35:42.500361695Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 24 23:35:43.027324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2177180728.mount: Deactivated successfully.
Apr 24 23:35:43.825993 containerd[1485]: time="2026-04-24T23:35:43.825936335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:43.827891 containerd[1485]: time="2026-04-24T23:35:43.827849775Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Apr 24 23:35:43.829365 containerd[1485]: time="2026-04-24T23:35:43.828359335Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:43.832348 containerd[1485]: time="2026-04-24T23:35:43.831678295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:43.834046 containerd[1485]: time="2026-04-24T23:35:43.833041255Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.33264216s"
Apr 24 23:35:43.834046 containerd[1485]: time="2026-04-24T23:35:43.833079855Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 24 23:35:43.834213 containerd[1485]: time="2026-04-24T23:35:43.834185735Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 24 23:35:44.007308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Apr 24 23:35:44.015668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:35:44.148581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:35:44.167125 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 23:35:44.218046 kubelet[2037]: E0424 23:35:44.217662    2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 23:35:44.221015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 23:35:44.221172 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 23:35:44.315423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49852365.mount: Deactivated successfully.
Apr 24 23:35:44.324569 containerd[1485]: time="2026-04-24T23:35:44.324022935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:44.325798 containerd[1485]: time="2026-04-24T23:35:44.325754815Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Apr 24 23:35:44.327615 containerd[1485]: time="2026-04-24T23:35:44.326056175Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:44.329395 containerd[1485]: time="2026-04-24T23:35:44.329061815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:44.330367 containerd[1485]: time="2026-04-24T23:35:44.330303175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 496.08284ms"
Apr 24 23:35:44.330439 containerd[1485]: time="2026-04-24T23:35:44.330368455Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 24 23:35:44.331269 containerd[1485]: time="2026-04-24T23:35:44.331237215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 24 23:35:44.796655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724199300.mount: Deactivated successfully.
Apr 24 23:35:45.496198 containerd[1485]: time="2026-04-24T23:35:45.496120375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:45.498350 containerd[1485]: time="2026-04-24T23:35:45.498162015Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886470"
Apr 24 23:35:45.499594 containerd[1485]: time="2026-04-24T23:35:45.499545775Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:45.504539 containerd[1485]: time="2026-04-24T23:35:45.504498295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 23:35:45.506179 containerd[1485]: time="2026-04-24T23:35:45.506050575Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.17477292s"
Apr 24 23:35:45.506179 containerd[1485]: time="2026-04-24T23:35:45.506086575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 24 23:35:50.907195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:35:50.920058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:35:50.958239 systemd[1]: Reloading requested from client PID 2138 ('systemctl') (unit session-7.scope)...
Apr 24 23:35:50.958438 systemd[1]: Reloading...
Apr 24 23:35:51.072390 zram_generator::config[2179]: No configuration found.
Apr 24 23:35:51.171739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 24 23:35:51.242541 systemd[1]: Reloading finished in 283 ms.
Apr 24 23:35:51.303599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:35:51.305821 (kubelet)[2217]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:35:51.310188 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:35:51.311589 systemd[1]: kubelet.service: Deactivated successfully.
Apr 24 23:35:51.313354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:35:51.320051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 23:35:51.443548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 23:35:51.457858 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 24 23:35:51.506349 kubelet[2234]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:35:51.506349 kubelet[2234]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 24 23:35:51.506349 kubelet[2234]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 24 23:35:51.506349 kubelet[2234]: I0424 23:35:51.504382    2234 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 24 23:35:52.688364 kubelet[2234]: I0424 23:35:52.686346    2234 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 24 23:35:52.688364 kubelet[2234]: I0424 23:35:52.686396    2234 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 24 23:35:52.688364 kubelet[2234]: I0424 23:35:52.686931    2234 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 24 23:35:52.716527 kubelet[2234]: E0424 23:35:52.716482    2234 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://178.105.26.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 24 23:35:52.719322 kubelet[2234]: I0424 23:35:52.719292    2234 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 24 23:35:52.730387 kubelet[2234]: E0424 23:35:52.730310    2234 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 24 23:35:52.730559 kubelet[2234]: I0424 23:35:52.730545    2234 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 24 23:35:52.734186 kubelet[2234]: I0424 23:35:52.734153    2234 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 24 23:35:52.734610 kubelet[2234]: I0424 23:35:52.734570    2234 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 24 23:35:52.734841 kubelet[2234]: I0424 23:35:52.734689    2234 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-3eeab28b3a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 24 23:35:52.734968 kubelet[2234]: I0424 23:35:52.734954    2234 topology_manager.go:138] "Creating topology manager with none policy"
Apr 24 23:35:52.735023 kubelet[2234]: I0424 23:35:52.735015    2234 container_manager_linux.go:303] "Creating device plugin manager"
Apr 24 23:35:52.735259 kubelet[2234]: I0424 23:35:52.735245    2234 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:35:52.739123 kubelet[2234]: I0424 23:35:52.739095    2234 kubelet.go:480] "Attempting to sync node with API server"
Apr 24 23:35:52.739238 kubelet[2234]: I0424 23:35:52.739226    2234 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 24 23:35:52.739320 kubelet[2234]: I0424 23:35:52.739310    2234 kubelet.go:386] "Adding apiserver pod source"
Apr 24 23:35:52.741408 kubelet[2234]: I0424 23:35:52.741388    2234 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 24 23:35:52.747177 kubelet[2234]: E0424 23:35:52.747138    2234 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.26.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3eeab28b3a&limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 24 23:35:52.747679 kubelet[2234]: E0424 23:35:52.747610    2234 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.105.26.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 24 23:35:52.747954 kubelet[2234]: I0424 23:35:52.747932    2234 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 24 23:35:52.748623 kubelet[2234]: I0424 23:35:52.748602    2234 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 24 23:35:52.748783 kubelet[2234]: W0424 23:35:52.748765    2234 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 24 23:35:52.752663 kubelet[2234]: I0424 23:35:52.752618    2234 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 24 23:35:52.752734 kubelet[2234]: I0424 23:35:52.752683    2234 server.go:1289] "Started kubelet"
Apr 24 23:35:52.755923 kubelet[2234]: I0424 23:35:52.755884    2234 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 24 23:35:52.760174 kubelet[2234]: E0424 23:35:52.757702    2234 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://178.105.26.190:6443/api/v1/namespaces/default/events\": dial tcp 178.105.26.190:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-3eeab28b3a.18a96f29ac7dc2ef  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-3eeab28b3a,UID:ci-4081-3-6-n-3eeab28b3a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-3eeab28b3a,},FirstTimestamp:2026-04-24 23:35:52.752636655 +0000 UTC m=+1.289492321,LastTimestamp:2026-04-24 23:35:52.752636655 +0000 UTC m=+1.289492321,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-3eeab28b3a,}"
Apr 24 23:35:52.761705 kubelet[2234]: I0424 23:35:52.761627    2234 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 24 23:35:52.762656 kubelet[2234]: I0424 23:35:52.762600    2234 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 24 23:35:52.763088 kubelet[2234]: I0424 23:35:52.763067    2234 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 24 23:35:52.763195 kubelet[2234]: I0424 23:35:52.763164    2234 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 24 23:35:52.763254 kubelet[2234]: I0424 23:35:52.763146    2234 server.go:317] "Adding debug handlers to kubelet server"
Apr 24 23:35:52.763471 kubelet[2234]: E0424 23:35:52.763440    2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found"
Apr 24 23:35:52.766227 kubelet[2234]: I0424 23:35:52.766179    2234 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 24 23:35:52.766832 kubelet[2234]: I0424 23:35:52.766795    2234 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 24 23:35:52.769226 kubelet[2234]: I0424 23:35:52.769192    2234 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 24 23:35:52.769226 kubelet[2234]: E0424 23:35:52.769197    2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.26.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3eeab28b3a?timeout=10s\": dial tcp 178.105.26.190:6443: connect: connection refused" interval="200ms"
Apr 24 23:35:52.769408 kubelet[2234]: I0424 23:35:52.769269    2234 reconciler.go:26] "Reconciler: start to sync state"
Apr 24 23:35:52.770428 kubelet[2234]: I0424 23:35:52.769622    2234 factory.go:223] Registration of the systemd container factory successfully
Apr 24 23:35:52.770428 kubelet[2234]: I0424 23:35:52.769770    2234 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 24 23:35:52.771702 kubelet[2234]: I0424 23:35:52.771672    2234 factory.go:223] Registration of the containerd container factory successfully
Apr 24 23:35:52.785513 kubelet[2234]: I0424 23:35:52.785465    2234 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 24 23:35:52.785513 kubelet[2234]: I0424 23:35:52.785503    2234 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 24 23:35:52.785513 kubelet[2234]: I0424 23:35:52.785523    2234 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 24 23:35:52.785792 kubelet[2234]: I0424 23:35:52.785531    2234 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 24 23:35:52.785792 kubelet[2234]: E0424 23:35:52.785573    2234 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 24 23:35:52.791097 kubelet[2234]: E0424 23:35:52.791053    2234 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.26.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 24 23:35:52.791097 kubelet[2234]: E0424 23:35:52.791226    2234 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.105.26.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 24 23:35:52.796068 kubelet[2234]: E0424 23:35:52.796013    2234 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 24 23:35:52.803164 kubelet[2234]: I0424 23:35:52.803105    2234 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 24 23:35:52.803164 kubelet[2234]: I0424 23:35:52.803132    2234 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 24 23:35:52.803164 kubelet[2234]: I0424 23:35:52.803153    2234 state_mem.go:36] "Initialized new in-memory state store"
Apr 24 23:35:52.806220 kubelet[2234]: I0424 23:35:52.806183    2234 policy_none.go:49] "None policy: Start"
Apr 24 23:35:52.806220 kubelet[2234]: I0424 23:35:52.806221    2234 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 24 23:35:52.806399 kubelet[2234]: I0424 23:35:52.806239    2234 state_mem.go:35] "Initializing new in-memory state store"
Apr 24 23:35:52.813257 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 24 23:35:52.826753 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 24 23:35:52.830671 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 24 23:35:52.839033 kubelet[2234]: E0424 23:35:52.838951    2234 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 24 23:35:52.839350 kubelet[2234]: I0424 23:35:52.839279    2234 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 24 23:35:52.839442 kubelet[2234]: I0424 23:35:52.839314    2234 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 24 23:35:52.840983 kubelet[2234]: I0424 23:35:52.840907    2234 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 24 23:35:52.844305 kubelet[2234]: E0424 23:35:52.844277    2234 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Apr 24 23:35:52.844549 kubelet[2234]: E0424 23:35:52.844319 2234 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-3eeab28b3a\" not found" Apr 24 23:35:52.901051 systemd[1]: Created slice kubepods-burstable-pod478323ceaeb55554082a1b5dff4e813b.slice - libcontainer container kubepods-burstable-pod478323ceaeb55554082a1b5dff4e813b.slice. Apr 24 23:35:52.910992 kubelet[2234]: E0424 23:35:52.910653 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:52.915841 systemd[1]: Created slice kubepods-burstable-pod67bfe0d3098204b9b2ccf4a311bb2beb.slice - libcontainer container kubepods-burstable-pod67bfe0d3098204b9b2ccf4a311bb2beb.slice. Apr 24 23:35:52.918886 kubelet[2234]: E0424 23:35:52.918699 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:52.920688 systemd[1]: Created slice kubepods-burstable-pod61dffa160d0592cde0efc1b1846b9de0.slice - libcontainer container kubepods-burstable-pod61dffa160d0592cde0efc1b1846b9de0.slice. 
Apr 24 23:35:52.922841 kubelet[2234]: E0424 23:35:52.922817 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:52.942219 kubelet[2234]: I0424 23:35:52.942118 2234 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:52.942584 kubelet[2234]: E0424 23:35:52.942520 2234 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.26.190:6443/api/v1/nodes\": dial tcp 178.105.26.190:6443: connect: connection refused" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:52.971365 kubelet[2234]: E0424 23:35:52.971191 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.26.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3eeab28b3a?timeout=10s\": dial tcp 178.105.26.190:6443: connect: connection refused" interval="400ms" Apr 24 23:35:53.071189 kubelet[2234]: I0424 23:35:53.071025 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071189 kubelet[2234]: I0424 23:35:53.071107 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071189 kubelet[2234]: I0424 23:35:53.071164 2234 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071189 kubelet[2234]: I0424 23:35:53.071199 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071189 kubelet[2234]: I0424 23:35:53.071231 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61dffa160d0592cde0efc1b1846b9de0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-3eeab28b3a\" (UID: \"61dffa160d0592cde0efc1b1846b9de0\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071698 kubelet[2234]: I0424 23:35:53.071266 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/478323ceaeb55554082a1b5dff4e813b-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" (UID: \"478323ceaeb55554082a1b5dff4e813b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071698 kubelet[2234]: I0424 23:35:53.071297 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/478323ceaeb55554082a1b5dff4e813b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" (UID: \"478323ceaeb55554082a1b5dff4e813b\") " 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071698 kubelet[2234]: I0424 23:35:53.071365 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/478323ceaeb55554082a1b5dff4e813b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" (UID: \"478323ceaeb55554082a1b5dff4e813b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.071698 kubelet[2234]: I0424 23:35:53.071402 2234 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.145504 kubelet[2234]: I0424 23:35:53.145345 2234 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.145859 kubelet[2234]: E0424 23:35:53.145823 2234 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.26.190:6443/api/v1/nodes\": dial tcp 178.105.26.190:6443: connect: connection refused" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.213069 containerd[1485]: time="2026-04-24T23:35:53.212867975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-3eeab28b3a,Uid:478323ceaeb55554082a1b5dff4e813b,Namespace:kube-system,Attempt:0,}" Apr 24 23:35:53.219747 containerd[1485]: time="2026-04-24T23:35:53.219698455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-3eeab28b3a,Uid:67bfe0d3098204b9b2ccf4a311bb2beb,Namespace:kube-system,Attempt:0,}" Apr 24 23:35:53.224452 containerd[1485]: time="2026-04-24T23:35:53.224400055Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-3eeab28b3a,Uid:61dffa160d0592cde0efc1b1846b9de0,Namespace:kube-system,Attempt:0,}" Apr 24 23:35:53.372289 kubelet[2234]: E0424 23:35:53.372240 2234 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://178.105.26.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-3eeab28b3a?timeout=10s\": dial tcp 178.105.26.190:6443: connect: connection refused" interval="800ms" Apr 24 23:35:53.549740 kubelet[2234]: I0424 23:35:53.549653 2234 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.551036 kubelet[2234]: E0424 23:35:53.550956 2234 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://178.105.26.190:6443/api/v1/nodes\": dial tcp 178.105.26.190:6443: connect: connection refused" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:53.558496 kubelet[2234]: E0424 23:35:53.558420 2234 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://178.105.26.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 23:35:53.635666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900801382.mount: Deactivated successfully. 
Apr 24 23:35:53.642474 containerd[1485]: time="2026-04-24T23:35:53.642406815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:35:53.643889 containerd[1485]: time="2026-04-24T23:35:53.643846735Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:35:53.645317 containerd[1485]: time="2026-04-24T23:35:53.645274615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:35:53.645606 containerd[1485]: time="2026-04-24T23:35:53.645577975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 24 23:35:53.646364 containerd[1485]: time="2026-04-24T23:35:53.646282135Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:35:53.647516 containerd[1485]: time="2026-04-24T23:35:53.647452455Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:35:53.648573 containerd[1485]: time="2026-04-24T23:35:53.648529295Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 24 23:35:53.651551 containerd[1485]: time="2026-04-24T23:35:53.651444815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 23:35:53.653531 
containerd[1485]: time="2026-04-24T23:35:53.653489655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 440.46236ms" Apr 24 23:35:53.655708 containerd[1485]: time="2026-04-24T23:35:53.654923815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 434.95956ms" Apr 24 23:35:53.658631 containerd[1485]: time="2026-04-24T23:35:53.658589935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 434.10864ms" Apr 24 23:35:53.794131 containerd[1485]: time="2026-04-24T23:35:53.793871575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:53.794131 containerd[1485]: time="2026-04-24T23:35:53.793935735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:53.794131 containerd[1485]: time="2026-04-24T23:35:53.793952295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:53.794131 containerd[1485]: time="2026-04-24T23:35:53.794048295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:53.796861 containerd[1485]: time="2026-04-24T23:35:53.796765895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:53.796861 containerd[1485]: time="2026-04-24T23:35:53.796820455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:53.797095 containerd[1485]: time="2026-04-24T23:35:53.797051695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:53.797275 containerd[1485]: time="2026-04-24T23:35:53.797232935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:53.802194 containerd[1485]: time="2026-04-24T23:35:53.802031535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:35:53.804086 containerd[1485]: time="2026-04-24T23:35:53.803863815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:35:53.804086 containerd[1485]: time="2026-04-24T23:35:53.803909215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:53.804086 containerd[1485]: time="2026-04-24T23:35:53.804035895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:35:53.821157 systemd[1]: Started cri-containerd-18b3b13788ad5a2b9df5e1d388d90d8d9c3b60527e4ff230ac8af0c9b22a0aea.scope - libcontainer container 18b3b13788ad5a2b9df5e1d388d90d8d9c3b60527e4ff230ac8af0c9b22a0aea. 
Apr 24 23:35:53.827412 systemd[1]: Started cri-containerd-ab0238027b8eeb7c5607afe42a50e703188abbdbbbdbe23924e47e1131e3d2bd.scope - libcontainer container ab0238027b8eeb7c5607afe42a50e703188abbdbbbdbe23924e47e1131e3d2bd. Apr 24 23:35:53.842101 systemd[1]: Started cri-containerd-4c98234f6177917dcd44b631d5bbb42094c0e502042a64a3204baacfdc7300d2.scope - libcontainer container 4c98234f6177917dcd44b631d5bbb42094c0e502042a64a3204baacfdc7300d2. Apr 24 23:35:53.842550 kubelet[2234]: E0424 23:35:53.842147 2234 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://178.105.26.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-3eeab28b3a&limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 23:35:53.886343 containerd[1485]: time="2026-04-24T23:35:53.885104695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-3eeab28b3a,Uid:478323ceaeb55554082a1b5dff4e813b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab0238027b8eeb7c5607afe42a50e703188abbdbbbdbe23924e47e1131e3d2bd\"" Apr 24 23:35:53.893220 containerd[1485]: time="2026-04-24T23:35:53.893175975Z" level=info msg="CreateContainer within sandbox \"ab0238027b8eeb7c5607afe42a50e703188abbdbbbdbe23924e47e1131e3d2bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 23:35:53.896119 containerd[1485]: time="2026-04-24T23:35:53.896083975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-3eeab28b3a,Uid:61dffa160d0592cde0efc1b1846b9de0,Namespace:kube-system,Attempt:0,} returns sandbox id \"18b3b13788ad5a2b9df5e1d388d90d8d9c3b60527e4ff230ac8af0c9b22a0aea\"" Apr 24 23:35:53.898782 containerd[1485]: time="2026-04-24T23:35:53.898751655Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-3eeab28b3a,Uid:67bfe0d3098204b9b2ccf4a311bb2beb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c98234f6177917dcd44b631d5bbb42094c0e502042a64a3204baacfdc7300d2\"" Apr 24 23:35:53.904410 containerd[1485]: time="2026-04-24T23:35:53.904371415Z" level=info msg="CreateContainer within sandbox \"18b3b13788ad5a2b9df5e1d388d90d8d9c3b60527e4ff230ac8af0c9b22a0aea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 23:35:53.906351 containerd[1485]: time="2026-04-24T23:35:53.905998255Z" level=info msg="CreateContainer within sandbox \"4c98234f6177917dcd44b631d5bbb42094c0e502042a64a3204baacfdc7300d2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 23:35:53.917409 containerd[1485]: time="2026-04-24T23:35:53.917262735Z" level=info msg="CreateContainer within sandbox \"ab0238027b8eeb7c5607afe42a50e703188abbdbbbdbe23924e47e1131e3d2bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f08d898673e3cba2878cfa08340a14cd0c7b2a0fec05f31ea530aaa5cb37ba8\"" Apr 24 23:35:53.919356 containerd[1485]: time="2026-04-24T23:35:53.918514695Z" level=info msg="StartContainer for \"3f08d898673e3cba2878cfa08340a14cd0c7b2a0fec05f31ea530aaa5cb37ba8\"" Apr 24 23:35:53.927584 containerd[1485]: time="2026-04-24T23:35:53.927536775Z" level=info msg="CreateContainer within sandbox \"4c98234f6177917dcd44b631d5bbb42094c0e502042a64a3204baacfdc7300d2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f7e6290a46e25464309abdbb8017f998807ec991b805fac4c1b1e11737ccdf61\"" Apr 24 23:35:53.928917 containerd[1485]: time="2026-04-24T23:35:53.928661855Z" level=info msg="StartContainer for \"f7e6290a46e25464309abdbb8017f998807ec991b805fac4c1b1e11737ccdf61\"" Apr 24 23:35:53.929705 containerd[1485]: time="2026-04-24T23:35:53.929618015Z" level=info msg="CreateContainer within sandbox 
\"18b3b13788ad5a2b9df5e1d388d90d8d9c3b60527e4ff230ac8af0c9b22a0aea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1443558bded8e75e701b1c3d613aef84b76e9893a057b83095a2df6710cf806\"" Apr 24 23:35:53.930046 containerd[1485]: time="2026-04-24T23:35:53.930014135Z" level=info msg="StartContainer for \"f1443558bded8e75e701b1c3d613aef84b76e9893a057b83095a2df6710cf806\"" Apr 24 23:35:53.958593 systemd[1]: Started cri-containerd-3f08d898673e3cba2878cfa08340a14cd0c7b2a0fec05f31ea530aaa5cb37ba8.scope - libcontainer container 3f08d898673e3cba2878cfa08340a14cd0c7b2a0fec05f31ea530aaa5cb37ba8. Apr 24 23:35:53.959734 systemd[1]: Started cri-containerd-f7e6290a46e25464309abdbb8017f998807ec991b805fac4c1b1e11737ccdf61.scope - libcontainer container f7e6290a46e25464309abdbb8017f998807ec991b805fac4c1b1e11737ccdf61. Apr 24 23:35:53.976513 systemd[1]: Started cri-containerd-f1443558bded8e75e701b1c3d613aef84b76e9893a057b83095a2df6710cf806.scope - libcontainer container f1443558bded8e75e701b1c3d613aef84b76e9893a057b83095a2df6710cf806. 
Apr 24 23:35:54.024085 containerd[1485]: time="2026-04-24T23:35:54.023902495Z" level=info msg="StartContainer for \"3f08d898673e3cba2878cfa08340a14cd0c7b2a0fec05f31ea530aaa5cb37ba8\" returns successfully" Apr 24 23:35:54.032314 containerd[1485]: time="2026-04-24T23:35:54.032102375Z" level=info msg="StartContainer for \"f7e6290a46e25464309abdbb8017f998807ec991b805fac4c1b1e11737ccdf61\" returns successfully" Apr 24 23:35:54.057542 containerd[1485]: time="2026-04-24T23:35:54.056990775Z" level=info msg="StartContainer for \"f1443558bded8e75e701b1c3d613aef84b76e9893a057b83095a2df6710cf806\" returns successfully" Apr 24 23:35:54.090063 kubelet[2234]: E0424 23:35:54.090014 2234 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://178.105.26.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 23:35:54.118832 kubelet[2234]: E0424 23:35:54.118755 2234 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://178.105.26.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 178.105.26.190:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 23:35:54.353550 kubelet[2234]: I0424 23:35:54.353450 2234 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:54.809895 kubelet[2234]: E0424 23:35:54.809855 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:54.810226 kubelet[2234]: E0424 23:35:54.810183 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:54.813729 kubelet[2234]: E0424 23:35:54.813690 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:55.817411 kubelet[2234]: E0424 23:35:55.817209 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:55.818949 kubelet[2234]: E0424 23:35:55.818788 2234 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:55.863870 kubelet[2234]: E0424 23:35:55.863832 2234 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-3eeab28b3a\" not found" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:55.960909 kubelet[2234]: I0424 23:35:55.960708 2234 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:55.960909 kubelet[2234]: E0424 23:35:55.960763 2234 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-3eeab28b3a\": node \"ci-4081-3-6-n-3eeab28b3a\" not found" Apr 24 23:35:55.980360 kubelet[2234]: E0424 23:35:55.978976 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" Apr 24 23:35:56.080153 kubelet[2234]: E0424 23:35:56.080028 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" Apr 24 23:35:56.180813 kubelet[2234]: E0424 23:35:56.180751 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" Apr 24 
23:35:56.280940 kubelet[2234]: E0424 23:35:56.280896 2234 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" Apr 24 23:35:56.365553 kubelet[2234]: I0424 23:35:56.364838 2234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:56.373145 kubelet[2234]: E0424 23:35:56.373101 2234 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:56.373325 kubelet[2234]: I0424 23:35:56.373311 2234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:56.375899 kubelet[2234]: E0424 23:35:56.375687 2234 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:56.375899 kubelet[2234]: I0424 23:35:56.375715 2234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:56.378309 kubelet[2234]: E0424 23:35:56.378207 2234 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-3eeab28b3a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:56.751380 kubelet[2234]: I0424 23:35:56.749631 2234 apiserver.go:52] "Watching apiserver" Apr 24 23:35:56.769612 kubelet[2234]: I0424 23:35:56.769530 2234 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:35:56.819065 kubelet[2234]: I0424 23:35:56.816881 2234 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:56.819500 kubelet[2234]: I0424 23:35:56.819402 2234 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:58.406919 systemd[1]: Reloading requested from client PID 2518 ('systemctl') (unit session-7.scope)... Apr 24 23:35:58.406937 systemd[1]: Reloading... Apr 24 23:35:58.499826 zram_generator::config[2558]: No configuration found. Apr 24 23:35:58.599428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 24 23:35:58.683078 systemd[1]: Reloading finished in 275 ms. Apr 24 23:35:58.721426 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:35:58.734472 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 23:35:58.734807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:35:58.734867 systemd[1]: kubelet.service: Consumed 1.707s CPU time, 129.3M memory peak, 0B memory swap peak. Apr 24 23:35:58.742741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 23:35:58.885151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 23:35:58.897774 (kubelet)[2603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 23:35:58.940672 kubelet[2603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:35:58.940672 kubelet[2603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 24 23:35:58.940672 kubelet[2603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 23:35:58.940672 kubelet[2603]: I0424 23:35:58.939904 2603 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 23:35:58.948547 kubelet[2603]: I0424 23:35:58.948459 2603 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 23:35:58.948740 kubelet[2603]: I0424 23:35:58.948724 2603 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 23:35:58.949246 kubelet[2603]: I0424 23:35:58.949224 2603 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 23:35:58.953304 kubelet[2603]: I0424 23:35:58.953248 2603 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 23:35:58.957235 kubelet[2603]: I0424 23:35:58.957093 2603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 23:35:58.961667 kubelet[2603]: E0424 23:35:58.961629 2603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 24 23:35:58.962065 kubelet[2603]: I0424 23:35:58.961826 2603 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 24 23:35:58.964632 kubelet[2603]: I0424 23:35:58.964072 2603 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 23:35:58.964632 kubelet[2603]: I0424 23:35:58.964268 2603 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 23:35:58.964632 kubelet[2603]: I0424 23:35:58.964291 2603 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-3eeab28b3a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 23:35:58.964632 kubelet[2603]: I0424 23:35:58.964503 2603 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
23:35:58.965402 kubelet[2603]: I0424 23:35:58.964520 2603 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 23:35:58.965402 kubelet[2603]: I0424 23:35:58.964569 2603 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:35:58.965402 kubelet[2603]: I0424 23:35:58.964736 2603 kubelet.go:480] "Attempting to sync node with API server" Apr 24 23:35:58.965402 kubelet[2603]: I0424 23:35:58.964750 2603 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 23:35:58.965402 kubelet[2603]: I0424 23:35:58.964771 2603 kubelet.go:386] "Adding apiserver pod source" Apr 24 23:35:58.965402 kubelet[2603]: I0424 23:35:58.964791 2603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 23:35:58.969666 kubelet[2603]: I0424 23:35:58.969438 2603 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 24 23:35:58.970527 kubelet[2603]: I0424 23:35:58.970091 2603 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 23:35:58.974703 kubelet[2603]: I0424 23:35:58.974677 2603 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 23:35:58.974785 kubelet[2603]: I0424 23:35:58.974719 2603 server.go:1289] "Started kubelet" Apr 24 23:35:58.977075 kubelet[2603]: I0424 23:35:58.976417 2603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 23:35:58.978018 kubelet[2603]: I0424 23:35:58.977606 2603 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 23:35:58.978991 kubelet[2603]: I0424 23:35:58.978147 2603 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 23:35:58.979347 kubelet[2603]: I0424 23:35:58.979313 2603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 
23:35:58.980363 kubelet[2603]: I0424 23:35:58.979857 2603 server.go:317] "Adding debug handlers to kubelet server" Apr 24 23:35:58.989563 kubelet[2603]: I0424 23:35:58.989531 2603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 23:35:58.994338 kubelet[2603]: I0424 23:35:58.993405 2603 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 23:35:58.994935 kubelet[2603]: E0424 23:35:58.994899 2603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-3eeab28b3a\" not found" Apr 24 23:35:58.996891 kubelet[2603]: I0424 23:35:58.996848 2603 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 23:35:58.996991 kubelet[2603]: I0424 23:35:58.996983 2603 reconciler.go:26] "Reconciler: start to sync state" Apr 24 23:35:59.014066 kubelet[2603]: I0424 23:35:59.014027 2603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 23:35:59.015080 kubelet[2603]: I0424 23:35:59.015063 2603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 23:35:59.015182 kubelet[2603]: I0424 23:35:59.015173 2603 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 23:35:59.015243 kubelet[2603]: I0424 23:35:59.015234 2603 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 23:35:59.015289 kubelet[2603]: I0424 23:35:59.015283 2603 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 23:35:59.015438 kubelet[2603]: E0424 23:35:59.015421 2603 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 23:35:59.022144 kubelet[2603]: I0424 23:35:59.022113 2603 factory.go:223] Registration of the systemd container factory successfully Apr 24 23:35:59.022434 kubelet[2603]: I0424 23:35:59.022412 2603 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 23:35:59.037546 kubelet[2603]: I0424 23:35:59.037504 2603 factory.go:223] Registration of the containerd container factory successfully Apr 24 23:35:59.044282 kubelet[2603]: E0424 23:35:59.044253 2603 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 23:35:59.084717 kubelet[2603]: I0424 23:35:59.084692 2603 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 23:35:59.084842 kubelet[2603]: I0424 23:35:59.084830 2603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 23:35:59.085371 kubelet[2603]: I0424 23:35:59.085324 2603 state_mem.go:36] "Initialized new in-memory state store" Apr 24 23:35:59.086565 kubelet[2603]: I0424 23:35:59.086544 2603 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 23:35:59.086670 kubelet[2603]: I0424 23:35:59.086646 2603 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 23:35:59.086720 kubelet[2603]: I0424 23:35:59.086713 2603 policy_none.go:49] "None policy: Start" Apr 24 23:35:59.086767 kubelet[2603]: I0424 23:35:59.086760 2603 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 23:35:59.087374 kubelet[2603]: I0424 23:35:59.086822 2603 state_mem.go:35] "Initializing new in-memory state store" Apr 24 23:35:59.087374 kubelet[2603]: I0424 23:35:59.086923 2603 state_mem.go:75] "Updated machine memory state" Apr 24 23:35:59.092382 kubelet[2603]: E0424 23:35:59.092300 2603 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 23:35:59.092562 kubelet[2603]: I0424 23:35:59.092523 2603 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 23:35:59.092562 kubelet[2603]: I0424 23:35:59.092544 2603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 23:35:59.095032 kubelet[2603]: E0424 23:35:59.094903 2603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 24 23:35:59.097279 kubelet[2603]: I0424 23:35:59.097238 2603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 23:35:59.117423 kubelet[2603]: I0424 23:35:59.117021 2603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.117423 kubelet[2603]: I0424 23:35:59.117076 2603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.119520 kubelet[2603]: I0424 23:35:59.119383 2603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.128563 kubelet[2603]: E0424 23:35:59.128378 2603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.128926 kubelet[2603]: E0424 23:35:59.128854 2603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-3eeab28b3a\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.196875 kubelet[2603]: I0424 23:35:59.195802 2603 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.199877 kubelet[2603]: I0424 23:35:59.199612 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.199877 kubelet[2603]: I0424 23:35:59.199649 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/61dffa160d0592cde0efc1b1846b9de0-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-3eeab28b3a\" (UID: \"61dffa160d0592cde0efc1b1846b9de0\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.199877 kubelet[2603]: I0424 23:35:59.199672 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/478323ceaeb55554082a1b5dff4e813b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" (UID: \"478323ceaeb55554082a1b5dff4e813b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.199877 kubelet[2603]: I0424 23:35:59.199693 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.199877 kubelet[2603]: I0424 23:35:59.199711 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.200085 kubelet[2603]: I0424 23:35:59.199727 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/478323ceaeb55554082a1b5dff4e813b-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" (UID: \"478323ceaeb55554082a1b5dff4e813b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.200085 
kubelet[2603]: I0424 23:35:59.199740 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/478323ceaeb55554082a1b5dff4e813b-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" (UID: \"478323ceaeb55554082a1b5dff4e813b\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.200085 kubelet[2603]: I0424 23:35:59.199793 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.200085 kubelet[2603]: I0424 23:35:59.199815 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/67bfe0d3098204b9b2ccf4a311bb2beb-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-3eeab28b3a\" (UID: \"67bfe0d3098204b9b2ccf4a311bb2beb\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.211271 kubelet[2603]: I0424 23:35:59.211212 2603 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.211423 kubelet[2603]: I0424 23:35:59.211321 2603 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:35:59.965689 kubelet[2603]: I0424 23:35:59.965653 2603 apiserver.go:52] "Watching apiserver" Apr 24 23:35:59.997930 kubelet[2603]: I0424 23:35:59.997876 2603 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 23:36:00.071595 kubelet[2603]: I0424 23:36:00.071383 2603 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:00.086787 kubelet[2603]: E0424 23:36:00.086458 2603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-3eeab28b3a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:00.128012 kubelet[2603]: I0424 23:36:00.127761 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-3eeab28b3a" podStartSLOduration=4.127734107 podStartE2EDuration="4.127734107s" podCreationTimestamp="2026-04-24 23:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:00.100741003 +0000 UTC m=+1.197188293" watchObservedRunningTime="2026-04-24 23:36:00.127734107 +0000 UTC m=+1.224181437" Apr 24 23:36:00.143496 kubelet[2603]: I0424 23:36:00.142436 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-3eeab28b3a" podStartSLOduration=1.142406245 podStartE2EDuration="1.142406245s" podCreationTimestamp="2026-04-24 23:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:00.127510433 +0000 UTC m=+1.223957763" watchObservedRunningTime="2026-04-24 23:36:00.142406245 +0000 UTC m=+1.238853535" Apr 24 23:36:00.158070 kubelet[2603]: I0424 23:36:00.157965 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-3eeab28b3a" podStartSLOduration=4.157938479 podStartE2EDuration="4.157938479s" podCreationTimestamp="2026-04-24 23:35:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:00.142654238 +0000 UTC m=+1.239101488" watchObservedRunningTime="2026-04-24 
23:36:00.157938479 +0000 UTC m=+1.254385809" Apr 24 23:36:04.657652 kubelet[2603]: I0424 23:36:04.657605 2603 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 23:36:04.658945 kubelet[2603]: I0424 23:36:04.658621 2603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 23:36:04.659050 containerd[1485]: time="2026-04-24T23:36:04.658235180Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 23:36:05.612102 systemd[1]: Created slice kubepods-besteffort-pod8bf2bfc3_5337_4344_950b_4f0c132f9591.slice - libcontainer container kubepods-besteffort-pod8bf2bfc3_5337_4344_950b_4f0c132f9591.slice. Apr 24 23:36:05.644741 kubelet[2603]: I0424 23:36:05.644690 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4tc\" (UniqueName: \"kubernetes.io/projected/8bf2bfc3-5337-4344-950b-4f0c132f9591-kube-api-access-tq4tc\") pod \"kube-proxy-6bjml\" (UID: \"8bf2bfc3-5337-4344-950b-4f0c132f9591\") " pod="kube-system/kube-proxy-6bjml" Apr 24 23:36:05.645002 kubelet[2603]: I0424 23:36:05.644977 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bf2bfc3-5337-4344-950b-4f0c132f9591-lib-modules\") pod \"kube-proxy-6bjml\" (UID: \"8bf2bfc3-5337-4344-950b-4f0c132f9591\") " pod="kube-system/kube-proxy-6bjml" Apr 24 23:36:05.645130 kubelet[2603]: I0424 23:36:05.645110 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8bf2bfc3-5337-4344-950b-4f0c132f9591-kube-proxy\") pod \"kube-proxy-6bjml\" (UID: \"8bf2bfc3-5337-4344-950b-4f0c132f9591\") " pod="kube-system/kube-proxy-6bjml" Apr 24 23:36:05.645256 kubelet[2603]: I0424 23:36:05.645237 2603 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bf2bfc3-5337-4344-950b-4f0c132f9591-xtables-lock\") pod \"kube-proxy-6bjml\" (UID: \"8bf2bfc3-5337-4344-950b-4f0c132f9591\") " pod="kube-system/kube-proxy-6bjml" Apr 24 23:36:05.895433 systemd[1]: Created slice kubepods-besteffort-pod2245324b_aa83_45c9_8ee2_cb6f7a6e8639.slice - libcontainer container kubepods-besteffort-pod2245324b_aa83_45c9_8ee2_cb6f7a6e8639.slice. Apr 24 23:36:05.923998 containerd[1485]: time="2026-04-24T23:36:05.923956642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bjml,Uid:8bf2bfc3-5337-4344-950b-4f0c132f9591,Namespace:kube-system,Attempt:0,}" Apr 24 23:36:05.947530 kubelet[2603]: I0424 23:36:05.947421 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2245324b-aa83-45c9-8ee2-cb6f7a6e8639-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-sf6bl\" (UID: \"2245324b-aa83-45c9-8ee2-cb6f7a6e8639\") " pod="tigera-operator/tigera-operator-6bf85f8dd-sf6bl" Apr 24 23:36:05.947530 kubelet[2603]: I0424 23:36:05.947472 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n7c2\" (UniqueName: \"kubernetes.io/projected/2245324b-aa83-45c9-8ee2-cb6f7a6e8639-kube-api-access-5n7c2\") pod \"tigera-operator-6bf85f8dd-sf6bl\" (UID: \"2245324b-aa83-45c9-8ee2-cb6f7a6e8639\") " pod="tigera-operator/tigera-operator-6bf85f8dd-sf6bl" Apr 24 23:36:05.951872 containerd[1485]: time="2026-04-24T23:36:05.951759104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:05.951872 containerd[1485]: time="2026-04-24T23:36:05.951831902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:05.952391 containerd[1485]: time="2026-04-24T23:36:05.951892861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:05.952391 containerd[1485]: time="2026-04-24T23:36:05.951986179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:05.980802 systemd[1]: Started cri-containerd-3e209e2fc8252c91975684bfa8805a2f3c8937550fdf9e7fa4485c9ebcce899e.scope - libcontainer container 3e209e2fc8252c91975684bfa8805a2f3c8937550fdf9e7fa4485c9ebcce899e. Apr 24 23:36:06.006990 containerd[1485]: time="2026-04-24T23:36:06.006866084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6bjml,Uid:8bf2bfc3-5337-4344-950b-4f0c132f9591,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e209e2fc8252c91975684bfa8805a2f3c8937550fdf9e7fa4485c9ebcce899e\"" Apr 24 23:36:06.013679 containerd[1485]: time="2026-04-24T23:36:06.013641232Z" level=info msg="CreateContainer within sandbox \"3e209e2fc8252c91975684bfa8805a2f3c8937550fdf9e7fa4485c9ebcce899e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 23:36:06.032786 containerd[1485]: time="2026-04-24T23:36:06.032741779Z" level=info msg="CreateContainer within sandbox \"3e209e2fc8252c91975684bfa8805a2f3c8937550fdf9e7fa4485c9ebcce899e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"254edab82ef5993aa67ace7c6327445519d5957fa51c2aa9930745f0ee8023ff\"" Apr 24 23:36:06.033712 containerd[1485]: time="2026-04-24T23:36:06.033690721Z" level=info msg="StartContainer for \"254edab82ef5993aa67ace7c6327445519d5957fa51c2aa9930745f0ee8023ff\"" Apr 24 23:36:06.061412 systemd[1]: Started cri-containerd-254edab82ef5993aa67ace7c6327445519d5957fa51c2aa9930745f0ee8023ff.scope - libcontainer container 
254edab82ef5993aa67ace7c6327445519d5957fa51c2aa9930745f0ee8023ff. Apr 24 23:36:06.095422 containerd[1485]: time="2026-04-24T23:36:06.095285559Z" level=info msg="StartContainer for \"254edab82ef5993aa67ace7c6327445519d5957fa51c2aa9930745f0ee8023ff\" returns successfully" Apr 24 23:36:06.200362 containerd[1485]: time="2026-04-24T23:36:06.200022755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-sf6bl,Uid:2245324b-aa83-45c9-8ee2-cb6f7a6e8639,Namespace:tigera-operator,Attempt:0,}" Apr 24 23:36:06.234115 containerd[1485]: time="2026-04-24T23:36:06.233891614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:06.234115 containerd[1485]: time="2026-04-24T23:36:06.233956453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:06.234115 containerd[1485]: time="2026-04-24T23:36:06.233967053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:06.234115 containerd[1485]: time="2026-04-24T23:36:06.234047851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:06.256047 systemd[1]: Started cri-containerd-73d8222418a4b942ffa8bcbd4f6bf8049cd70cfd41f784bbad52fe056035ea93.scope - libcontainer container 73d8222418a4b942ffa8bcbd4f6bf8049cd70cfd41f784bbad52fe056035ea93. 
Apr 24 23:36:06.303262 containerd[1485]: time="2026-04-24T23:36:06.303191942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-sf6bl,Uid:2245324b-aa83-45c9-8ee2-cb6f7a6e8639,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"73d8222418a4b942ffa8bcbd4f6bf8049cd70cfd41f784bbad52fe056035ea93\"" Apr 24 23:36:06.306752 containerd[1485]: time="2026-04-24T23:36:06.306705474Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 24 23:36:07.126832 kubelet[2603]: I0424 23:36:07.125757 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6bjml" podStartSLOduration=2.125737646 podStartE2EDuration="2.125737646s" podCreationTimestamp="2026-04-24 23:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:07.12495714 +0000 UTC m=+8.221404430" watchObservedRunningTime="2026-04-24 23:36:07.125737646 +0000 UTC m=+8.222184896" Apr 24 23:36:07.711904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80534503.mount: Deactivated successfully. 
Apr 24 23:36:08.459373 containerd[1485]: time="2026-04-24T23:36:08.458589189Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:08.461097 containerd[1485]: time="2026-04-24T23:36:08.460854310Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 24 23:36:08.462722 containerd[1485]: time="2026-04-24T23:36:08.462220926Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:08.465273 containerd[1485]: time="2026-04-24T23:36:08.465243715Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:08.466401 containerd[1485]: time="2026-04-24T23:36:08.466361535Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.159437666s" Apr 24 23:36:08.466470 containerd[1485]: time="2026-04-24T23:36:08.466401175Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 24 23:36:08.471546 containerd[1485]: time="2026-04-24T23:36:08.471495167Z" level=info msg="CreateContainer within sandbox \"73d8222418a4b942ffa8bcbd4f6bf8049cd70cfd41f784bbad52fe056035ea93\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 23:36:08.489252 containerd[1485]: time="2026-04-24T23:36:08.489186544Z" level=info msg="CreateContainer within sandbox 
\"73d8222418a4b942ffa8bcbd4f6bf8049cd70cfd41f784bbad52fe056035ea93\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"89596fdff4af72dca6e894fa1190202b19162e5c4385ee2659bae589dbc26499\"" Apr 24 23:36:08.490573 containerd[1485]: time="2026-04-24T23:36:08.490456442Z" level=info msg="StartContainer for \"89596fdff4af72dca6e894fa1190202b19162e5c4385ee2659bae589dbc26499\"" Apr 24 23:36:08.515832 systemd[1]: run-containerd-runc-k8s.io-89596fdff4af72dca6e894fa1190202b19162e5c4385ee2659bae589dbc26499-runc.BEMFYn.mount: Deactivated successfully. Apr 24 23:36:08.523599 systemd[1]: Started cri-containerd-89596fdff4af72dca6e894fa1190202b19162e5c4385ee2659bae589dbc26499.scope - libcontainer container 89596fdff4af72dca6e894fa1190202b19162e5c4385ee2659bae589dbc26499. Apr 24 23:36:08.553356 containerd[1485]: time="2026-04-24T23:36:08.552783213Z" level=info msg="StartContainer for \"89596fdff4af72dca6e894fa1190202b19162e5c4385ee2659bae589dbc26499\" returns successfully" Apr 24 23:36:10.205724 kubelet[2603]: I0424 23:36:10.205440 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-sf6bl" podStartSLOduration=3.043144038 podStartE2EDuration="5.205422251s" podCreationTimestamp="2026-04-24 23:36:05 +0000 UTC" firstStartedPulling="2026-04-24 23:36:06.30537842 +0000 UTC m=+7.401825670" lastFinishedPulling="2026-04-24 23:36:08.467656593 +0000 UTC m=+9.564103883" observedRunningTime="2026-04-24 23:36:09.141022997 +0000 UTC m=+10.237470327" watchObservedRunningTime="2026-04-24 23:36:10.205422251 +0000 UTC m=+11.301869541" Apr 24 23:36:14.903576 sudo[1733]: pam_unix(sudo:session): session closed for user root Apr 24 23:36:14.919824 sshd[1730]: pam_unix(sshd:session): session closed for user core Apr 24 23:36:14.926738 systemd[1]: sshd@6-178.105.26.190:22-50.85.169.122:50850.service: Deactivated successfully. Apr 24 23:36:14.930980 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 24 23:36:14.932286 systemd[1]: session-7.scope: Consumed 7.674s CPU time, 155.2M memory peak, 0B memory swap peak. Apr 24 23:36:14.935144 systemd-logind[1462]: Session 7 logged out. Waiting for processes to exit. Apr 24 23:36:14.936634 systemd-logind[1462]: Removed session 7. Apr 24 23:36:21.878256 systemd[1]: Created slice kubepods-besteffort-podfc041ad2_c269_4f8d_a2df_a3c7fdaa174f.slice - libcontainer container kubepods-besteffort-podfc041ad2_c269_4f8d_a2df_a3c7fdaa174f.slice. Apr 24 23:36:21.952302 kubelet[2603]: I0424 23:36:21.952248 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctw8\" (UniqueName: \"kubernetes.io/projected/fc041ad2-c269-4f8d-a2df-a3c7fdaa174f-kube-api-access-9ctw8\") pod \"calico-typha-7bb58b94bd-cplvq\" (UID: \"fc041ad2-c269-4f8d-a2df-a3c7fdaa174f\") " pod="calico-system/calico-typha-7bb58b94bd-cplvq" Apr 24 23:36:21.952302 kubelet[2603]: I0424 23:36:21.952303 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc041ad2-c269-4f8d-a2df-a3c7fdaa174f-tigera-ca-bundle\") pod \"calico-typha-7bb58b94bd-cplvq\" (UID: \"fc041ad2-c269-4f8d-a2df-a3c7fdaa174f\") " pod="calico-system/calico-typha-7bb58b94bd-cplvq" Apr 24 23:36:21.952706 kubelet[2603]: I0424 23:36:21.952323 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fc041ad2-c269-4f8d-a2df-a3c7fdaa174f-typha-certs\") pod \"calico-typha-7bb58b94bd-cplvq\" (UID: \"fc041ad2-c269-4f8d-a2df-a3c7fdaa174f\") " pod="calico-system/calico-typha-7bb58b94bd-cplvq" Apr 24 23:36:21.984812 systemd[1]: Created slice kubepods-besteffort-pod42b96ba2_952b_4cde_baf6_b6eb23b36cb8.slice - libcontainer container kubepods-besteffort-pod42b96ba2_952b_4cde_baf6_b6eb23b36cb8.slice. 
Apr 24 23:36:22.053374 kubelet[2603]: I0424 23:36:22.053184 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-bpffs\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053374 kubelet[2603]: I0424 23:36:22.053298 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-nodeproc\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053555 kubelet[2603]: I0424 23:36:22.053388 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-policysync\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053555 kubelet[2603]: I0424 23:36:22.053422 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-flexvol-driver-host\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053555 kubelet[2603]: I0424 23:36:22.053446 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-lib-modules\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053555 kubelet[2603]: I0424 23:36:22.053485 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-sys-fs\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053555 kubelet[2603]: I0424 23:36:22.053511 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-var-run-calico\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053759 kubelet[2603]: I0424 23:36:22.053544 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-xtables-lock\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053759 kubelet[2603]: I0424 23:36:22.053602 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-cni-bin-dir\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053759 kubelet[2603]: I0424 23:36:22.053641 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-tigera-ca-bundle\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053759 kubelet[2603]: I0424 23:36:22.053689 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-cni-log-dir\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053759 kubelet[2603]: I0424 23:36:22.053738 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-cni-net-dir\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.053883 kubelet[2603]: I0424 23:36:22.053847 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dzqz\" (UniqueName: \"kubernetes.io/projected/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-kube-api-access-6dzqz\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.056206 kubelet[2603]: I0424 23:36:22.054063 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-node-certs\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.056206 kubelet[2603]: I0424 23:36:22.054127 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/42b96ba2-952b-4cde-baf6-b6eb23b36cb8-var-lib-calico\") pod \"calico-node-8xml7\" (UID: \"42b96ba2-952b-4cde-baf6-b6eb23b36cb8\") " pod="calico-system/calico-node-8xml7"
Apr 24 23:36:22.110183 kubelet[2603]: E0424 23:36:22.110133 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13"
Apr 24 23:36:22.156537 kubelet[2603]: I0424 23:36:22.155720 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13-kubelet-dir\") pod \"csi-node-driver-56znh\" (UID: \"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13\") " pod="calico-system/csi-node-driver-56znh"
Apr 24 23:36:22.156537 kubelet[2603]: I0424 23:36:22.155795 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13-registration-dir\") pod \"csi-node-driver-56znh\" (UID: \"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13\") " pod="calico-system/csi-node-driver-56znh"
Apr 24 23:36:22.156537 kubelet[2603]: I0424 23:36:22.155814 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13-socket-dir\") pod \"csi-node-driver-56znh\" (UID: \"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13\") " pod="calico-system/csi-node-driver-56znh"
Apr 24 23:36:22.156537 kubelet[2603]: I0424 23:36:22.155829 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4hn\" (UniqueName: \"kubernetes.io/projected/2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13-kube-api-access-jt4hn\") pod \"csi-node-driver-56znh\" (UID: \"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13\") " pod="calico-system/csi-node-driver-56znh"
Apr 24 23:36:22.156537 kubelet[2603]: I0424 23:36:22.155893 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13-varrun\") pod \"csi-node-driver-56znh\" (UID: \"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13\") " pod="calico-system/csi-node-driver-56znh"
Apr 24 23:36:22.157266 kubelet[2603]: E0424 23:36:22.157225 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:36:22.157266 kubelet[2603]: W0424 23:36:22.157262 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:36:22.157397 kubelet[2603]: E0424 23:36:22.157290 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:36:22.157521 kubelet[2603]: E0424 23:36:22.157499 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:36:22.157521 kubelet[2603]: W0424 23:36:22.157515 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:36:22.157588 kubelet[2603]: E0424 23:36:22.157525 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 24 23:36:22.186035 containerd[1485]: time="2026-04-24T23:36:22.185610504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bb58b94bd-cplvq,Uid:fc041ad2-c269-4f8d-a2df-a3c7fdaa174f,Namespace:calico-system,Attempt:0,}"
Apr 24 23:36:22.216537 kubelet[2603]: E0424 23:36:22.216485 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:36:22.216537 kubelet[2603]: W0424 23:36:22.216514 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:36:22.216537 kubelet[2603]: E0424 23:36:22.216541 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:36:22.217313 containerd[1485]: time="2026-04-24T23:36:22.217122205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 24 23:36:22.217313 containerd[1485]: time="2026-04-24T23:36:22.217213964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 24 23:36:22.217313 containerd[1485]: time="2026-04-24T23:36:22.217266644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:22.218464 containerd[1485]: time="2026-04-24T23:36:22.217765760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 24 23:36:22.236531 systemd[1]: Started cri-containerd-9616d5f6073e862160d12fee1e7cc02ac6d304297af545d755bd239a1a3a2807.scope - libcontainer container 9616d5f6073e862160d12fee1e7cc02ac6d304297af545d755bd239a1a3a2807.
Apr 24 23:36:22.257517 kubelet[2603]: E0424 23:36:22.257490 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:36:22.257881 kubelet[2603]: W0424 23:36:22.257718 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:36:22.257881 kubelet[2603]: E0424 23:36:22.257748 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 24 23:36:22.258171 kubelet[2603]: E0424 23:36:22.258096 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 24 23:36:22.258866 kubelet[2603]: W0424 23:36:22.258705 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 24 23:36:22.258866 kubelet[2603]: E0424 23:36:22.258734 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:22.259244 kubelet[2603]: E0424 23:36:22.259081 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.259244 kubelet[2603]: W0424 23:36:22.259095 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.259244 kubelet[2603]: E0424 23:36:22.259106 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:22.259904 kubelet[2603]: E0424 23:36:22.259756 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.259904 kubelet[2603]: W0424 23:36:22.259769 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.259904 kubelet[2603]: E0424 23:36:22.259787 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:22.260430 kubelet[2603]: E0424 23:36:22.260245 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.260430 kubelet[2603]: W0424 23:36:22.260259 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.260430 kubelet[2603]: E0424 23:36:22.260270 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:22.261241 kubelet[2603]: E0424 23:36:22.261224 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.261609 kubelet[2603]: W0424 23:36:22.261458 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.261609 kubelet[2603]: E0424 23:36:22.261478 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:22.262278 kubelet[2603]: E0424 23:36:22.262142 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.262278 kubelet[2603]: W0424 23:36:22.262158 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.262278 kubelet[2603]: E0424 23:36:22.262185 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:22.262906 kubelet[2603]: E0424 23:36:22.262772 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.262906 kubelet[2603]: W0424 23:36:22.262816 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.262906 kubelet[2603]: E0424 23:36:22.262830 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:22.263235 kubelet[2603]: E0424 23:36:22.263221 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.263372 kubelet[2603]: W0424 23:36:22.263293 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.263372 kubelet[2603]: E0424 23:36:22.263309 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:22.263746 kubelet[2603]: E0424 23:36:22.263644 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.263746 kubelet[2603]: W0424 23:36:22.263656 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.263746 kubelet[2603]: E0424 23:36:22.263667 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:22.264454 kubelet[2603]: E0424 23:36:22.264439 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.264616 kubelet[2603]: W0424 23:36:22.264525 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.264616 kubelet[2603]: E0424 23:36:22.264544 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:22.265162 kubelet[2603]: E0424 23:36:22.265042 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.265162 kubelet[2603]: W0424 23:36:22.265056 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.265162 kubelet[2603]: E0424 23:36:22.265067 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:22.275595 containerd[1485]: time="2026-04-24T23:36:22.275559679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bb58b94bd-cplvq,Uid:fc041ad2-c269-4f8d-a2df-a3c7fdaa174f,Namespace:calico-system,Attempt:0,} returns sandbox id \"9616d5f6073e862160d12fee1e7cc02ac6d304297af545d755bd239a1a3a2807\"" Apr 24 23:36:22.279402 containerd[1485]: time="2026-04-24T23:36:22.279360892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 24 23:36:22.290803 kubelet[2603]: E0424 23:36:22.290565 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:22.290803 kubelet[2603]: W0424 23:36:22.290587 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:22.290803 kubelet[2603]: E0424 23:36:22.290614 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:22.291555 containerd[1485]: time="2026-04-24T23:36:22.291225650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8xml7,Uid:42b96ba2-952b-4cde-baf6-b6eb23b36cb8,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:22.320181 containerd[1485]: time="2026-04-24T23:36:22.319931451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:22.320181 containerd[1485]: time="2026-04-24T23:36:22.320027810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:22.320181 containerd[1485]: time="2026-04-24T23:36:22.320057130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:22.321044 containerd[1485]: time="2026-04-24T23:36:22.320987163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:22.337559 systemd[1]: Started cri-containerd-025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59.scope - libcontainer container 025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59. Apr 24 23:36:22.363194 containerd[1485]: time="2026-04-24T23:36:22.363139910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8xml7,Uid:42b96ba2-952b-4cde-baf6-b6eb23b36cb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\"" Apr 24 23:36:23.746148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1953600098.mount: Deactivated successfully. 
Apr 24 23:36:24.016545 kubelet[2603]: E0424 23:36:24.016487 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 23:36:24.202619 containerd[1485]: time="2026-04-24T23:36:24.202531856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:24.206347 containerd[1485]: time="2026-04-24T23:36:24.205300719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 24 23:36:24.206347 containerd[1485]: time="2026-04-24T23:36:24.205805076Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:24.212157 containerd[1485]: time="2026-04-24T23:36:24.212092317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:24.212739 containerd[1485]: time="2026-04-24T23:36:24.212698954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.933294862s" Apr 24 23:36:24.212739 containerd[1485]: time="2026-04-24T23:36:24.212738673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 24 23:36:24.214604 containerd[1485]: time="2026-04-24T23:36:24.214574942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 24 23:36:24.242866 containerd[1485]: time="2026-04-24T23:36:24.242811690Z" level=info msg="CreateContainer within sandbox \"9616d5f6073e862160d12fee1e7cc02ac6d304297af545d755bd239a1a3a2807\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 23:36:24.262725 containerd[1485]: time="2026-04-24T23:36:24.262662569Z" level=info msg="CreateContainer within sandbox \"9616d5f6073e862160d12fee1e7cc02ac6d304297af545d755bd239a1a3a2807\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"460c1bcf75aa2b0041ae0a3005b57147978012a2cc5609fe5f3e86d96a5bfbaf\"" Apr 24 23:36:24.264837 containerd[1485]: time="2026-04-24T23:36:24.263486604Z" level=info msg="StartContainer for \"460c1bcf75aa2b0041ae0a3005b57147978012a2cc5609fe5f3e86d96a5bfbaf\"" Apr 24 23:36:24.300623 systemd[1]: Started cri-containerd-460c1bcf75aa2b0041ae0a3005b57147978012a2cc5609fe5f3e86d96a5bfbaf.scope - libcontainer container 460c1bcf75aa2b0041ae0a3005b57147978012a2cc5609fe5f3e86d96a5bfbaf. 
Apr 24 23:36:24.341514 containerd[1485]: time="2026-04-24T23:36:24.341441568Z" level=info msg="StartContainer for \"460c1bcf75aa2b0041ae0a3005b57147978012a2cc5609fe5f3e86d96a5bfbaf\" returns successfully" Apr 24 23:36:25.177560 kubelet[2603]: I0424 23:36:25.177506 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bb58b94bd-cplvq" podStartSLOduration=2.240977329 podStartE2EDuration="4.17747525s" podCreationTimestamp="2026-04-24 23:36:21 +0000 UTC" firstStartedPulling="2026-04-24 23:36:22.277670584 +0000 UTC m=+23.374117874" lastFinishedPulling="2026-04-24 23:36:24.214168505 +0000 UTC m=+25.310615795" observedRunningTime="2026-04-24 23:36:25.177175772 +0000 UTC m=+26.273623022" watchObservedRunningTime="2026-04-24 23:36:25.17747525 +0000 UTC m=+26.273922540" Apr 24 23:36:25.258962 kubelet[2603]: E0424 23:36:25.258895 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:25.258962 kubelet[2603]: W0424 23:36:25.258934 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:25.259271 kubelet[2603]: E0424 23:36:25.258982 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:25.293973 kubelet[2603]: E0424 23:36:25.293868 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:25.293973 kubelet[2603]: W0424 23:36:25.293879 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:25.293973 kubelet[2603]: E0424 23:36:25.293891 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:25.294295 kubelet[2603]: E0424 23:36:25.294283 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:25.294547 kubelet[2603]: W0424 23:36:25.294376 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:25.294547 kubelet[2603]: E0424 23:36:25.294393 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 23:36:25.294865 kubelet[2603]: E0424 23:36:25.294820 2603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 23:36:25.294865 kubelet[2603]: W0424 23:36:25.294832 2603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 23:36:25.294865 kubelet[2603]: E0424 23:36:25.294842 2603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 23:36:25.611046 containerd[1485]: time="2026-04-24T23:36:25.610967409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:25.613050 containerd[1485]: time="2026-04-24T23:36:25.612991237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 24 23:36:25.614236 containerd[1485]: time="2026-04-24T23:36:25.614170430Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:25.618283 containerd[1485]: time="2026-04-24T23:36:25.617706850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:25.618474 containerd[1485]: time="2026-04-24T23:36:25.618247727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.403503106s" Apr 24 23:36:25.618528 containerd[1485]: time="2026-04-24T23:36:25.618478046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 24 23:36:25.624050 containerd[1485]: time="2026-04-24T23:36:25.624002494Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 23:36:25.641405 containerd[1485]: time="2026-04-24T23:36:25.641357835Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8\"" Apr 24 23:36:25.642909 containerd[1485]: time="2026-04-24T23:36:25.642499028Z" level=info msg="StartContainer for \"0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8\"" Apr 24 23:36:25.681677 systemd[1]: Started cri-containerd-0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8.scope - libcontainer container 0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8. Apr 24 23:36:25.715280 containerd[1485]: time="2026-04-24T23:36:25.715205892Z" level=info msg="StartContainer for \"0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8\" returns successfully" Apr 24 23:36:25.732433 systemd[1]: cri-containerd-0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8.scope: Deactivated successfully. Apr 24 23:36:25.755742 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8-rootfs.mount: Deactivated successfully. 
Apr 24 23:36:25.843110 containerd[1485]: time="2026-04-24T23:36:25.842800562Z" level=info msg="shim disconnected" id=0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8 namespace=k8s.io Apr 24 23:36:25.843110 containerd[1485]: time="2026-04-24T23:36:25.842874241Z" level=warning msg="cleaning up after shim disconnected" id=0fcde8421a31f45d9bac0392e76a2bf9059f6006802e0d5ade9292cb2defb6f8 namespace=k8s.io Apr 24 23:36:25.843110 containerd[1485]: time="2026-04-24T23:36:25.842890401Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:36:25.856820 containerd[1485]: time="2026-04-24T23:36:25.856671482Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:36:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:36:26.017948 kubelet[2603]: E0424 23:36:26.015806 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 23:36:26.164304 kubelet[2603]: I0424 23:36:26.164249 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:36:26.167391 containerd[1485]: time="2026-04-24T23:36:26.167351404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 24 23:36:28.017763 kubelet[2603]: E0424 23:36:28.017136 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 23:36:30.017536 kubelet[2603]: E0424 23:36:30.016725 2603 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 23:36:30.846998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795318016.mount: Deactivated successfully. Apr 24 23:36:30.874360 containerd[1485]: time="2026-04-24T23:36:30.872717067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:30.874360 containerd[1485]: time="2026-04-24T23:36:30.873662423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 24 23:36:30.875486 containerd[1485]: time="2026-04-24T23:36:30.875420816Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:30.878009 containerd[1485]: time="2026-04-24T23:36:30.877978845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:30.878793 containerd[1485]: time="2026-04-24T23:36:30.878763922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 4.711368559s" Apr 24 23:36:30.878936 containerd[1485]: time="2026-04-24T23:36:30.878917241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image 
reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 24 23:36:30.888053 containerd[1485]: time="2026-04-24T23:36:30.887975004Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 23:36:30.907238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4019598272.mount: Deactivated successfully. Apr 24 23:36:30.913221 containerd[1485]: time="2026-04-24T23:36:30.913143499Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0\"" Apr 24 23:36:30.915357 containerd[1485]: time="2026-04-24T23:36:30.914549653Z" level=info msg="StartContainer for \"0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0\"" Apr 24 23:36:30.952680 systemd[1]: Started cri-containerd-0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0.scope - libcontainer container 0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0. Apr 24 23:36:30.990439 containerd[1485]: time="2026-04-24T23:36:30.990398699Z" level=info msg="StartContainer for \"0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0\" returns successfully" Apr 24 23:36:31.087715 systemd[1]: cri-containerd-0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0.scope: Deactivated successfully. 
Apr 24 23:36:31.249937 containerd[1485]: time="2026-04-24T23:36:31.249731568Z" level=info msg="shim disconnected" id=0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0 namespace=k8s.io Apr 24 23:36:31.250531 containerd[1485]: time="2026-04-24T23:36:31.250042087Z" level=warning msg="cleaning up after shim disconnected" id=0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0 namespace=k8s.io Apr 24 23:36:31.250531 containerd[1485]: time="2026-04-24T23:36:31.250067647Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:36:31.263803 containerd[1485]: time="2026-04-24T23:36:31.263756994Z" level=warning msg="cleanup warnings time=\"2026-04-24T23:36:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 24 23:36:31.849713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0916d2910159a9e43c057197d711422da149597a2eae6d6da1d477e352d326b0-rootfs.mount: Deactivated successfully. 
Apr 24 23:36:32.017421 kubelet[2603]: E0424 23:36:32.016717 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 23:36:32.197153 containerd[1485]: time="2026-04-24T23:36:32.196643736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 24 23:36:34.015727 kubelet[2603]: E0424 23:36:34.015673 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 23:36:34.662489 containerd[1485]: time="2026-04-24T23:36:34.661937593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:34.663457 containerd[1485]: time="2026-04-24T23:36:34.663402188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 24 23:36:34.664735 containerd[1485]: time="2026-04-24T23:36:34.664646504Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:34.668222 containerd[1485]: time="2026-04-24T23:36:34.668131413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:34.669598 containerd[1485]: time="2026-04-24T23:36:34.669211930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.472525994s" Apr 24 23:36:34.669598 containerd[1485]: time="2026-04-24T23:36:34.669253690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 24 23:36:34.674243 containerd[1485]: time="2026-04-24T23:36:34.674187754Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 23:36:34.693943 containerd[1485]: time="2026-04-24T23:36:34.693895531Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a\"" Apr 24 23:36:34.694731 containerd[1485]: time="2026-04-24T23:36:34.694694168Z" level=info msg="StartContainer for \"03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a\"" Apr 24 23:36:34.720877 systemd[1]: run-containerd-runc-k8s.io-03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a-runc.ZZhqxZ.mount: Deactivated successfully. Apr 24 23:36:34.730643 systemd[1]: Started cri-containerd-03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a.scope - libcontainer container 03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a. 
Apr 24 23:36:34.762604 containerd[1485]: time="2026-04-24T23:36:34.762550151Z" level=info msg="StartContainer for \"03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a\" returns successfully" Apr 24 23:36:35.336901 systemd[1]: cri-containerd-03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a.scope: Deactivated successfully. Apr 24 23:36:35.365185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a-rootfs.mount: Deactivated successfully. Apr 24 23:36:35.428409 kubelet[2603]: I0424 23:36:35.423941 2603 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 23:36:35.444265 containerd[1485]: time="2026-04-24T23:36:35.444169417Z" level=info msg="shim disconnected" id=03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a namespace=k8s.io Apr 24 23:36:35.444265 containerd[1485]: time="2026-04-24T23:36:35.444224217Z" level=warning msg="cleaning up after shim disconnected" id=03b550447a66ba34eef3eb72bc0de28bcb95801bee00af9743e626e78ba2115a namespace=k8s.io Apr 24 23:36:35.444265 containerd[1485]: time="2026-04-24T23:36:35.444232457Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 24 23:36:35.492955 systemd[1]: Created slice kubepods-burstable-pod2d26b79f_abbb_49bb_8a77_0b8884d8e07b.slice - libcontainer container kubepods-burstable-pod2d26b79f_abbb_49bb_8a77_0b8884d8e07b.slice. Apr 24 23:36:35.499792 systemd[1]: Created slice kubepods-burstable-pod644bb617_7ecf_48f1_9ddf_e7a4ce31159a.slice - libcontainer container kubepods-burstable-pod644bb617_7ecf_48f1_9ddf_e7a4ce31159a.slice. Apr 24 23:36:35.512039 systemd[1]: Created slice kubepods-besteffort-pod9bdb0b4f_d6e3_4c87_82ef_c529db27e927.slice - libcontainer container kubepods-besteffort-pod9bdb0b4f_d6e3_4c87_82ef_c529db27e927.slice. 
Apr 24 23:36:35.523484 systemd[1]: Created slice kubepods-besteffort-pod1b9d7692_7e14_40cd_a26e_6b56dd9c7d2d.slice - libcontainer container kubepods-besteffort-pod1b9d7692_7e14_40cd_a26e_6b56dd9c7d2d.slice. Apr 24 23:36:35.533887 systemd[1]: Created slice kubepods-besteffort-podbdd18962_34fd_4ffb_80f3_162e643f9847.slice - libcontainer container kubepods-besteffort-podbdd18962_34fd_4ffb_80f3_162e643f9847.slice. Apr 24 23:36:35.544398 systemd[1]: Created slice kubepods-besteffort-podda8aa797_c626_438e_9db6_18259f8e1bda.slice - libcontainer container kubepods-besteffort-podda8aa797_c626_438e_9db6_18259f8e1bda.slice. Apr 24 23:36:35.554428 systemd[1]: Created slice kubepods-besteffort-poda4307834_33a8_455a_b7eb_35751e1353f1.slice - libcontainer container kubepods-besteffort-poda4307834_33a8_455a_b7eb_35751e1353f1.slice. Apr 24 23:36:35.572120 kubelet[2603]: I0424 23:36:35.571936 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsppb\" (UniqueName: \"kubernetes.io/projected/1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d-kube-api-access-vsppb\") pod \"goldmane-5b85766d88-h97hm\" (UID: \"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d\") " pod="calico-system/goldmane-5b85766d88-h97hm" Apr 24 23:36:35.573259 kubelet[2603]: I0424 23:36:35.573201 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bdb0b4f-d6e3-4c87-82ef-c529db27e927-tigera-ca-bundle\") pod \"calico-kube-controllers-5dfdb6b6fc-lsx28\" (UID: \"9bdb0b4f-d6e3-4c87-82ef-c529db27e927\") " pod="calico-system/calico-kube-controllers-5dfdb6b6fc-lsx28" Apr 24 23:36:35.573381 kubelet[2603]: I0424 23:36:35.573285 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da8aa797-c626-438e-9db6-18259f8e1bda-calico-apiserver-certs\") pod 
\"calico-apiserver-5dfb68d68d-lvlpx\" (UID: \"da8aa797-c626-438e-9db6-18259f8e1bda\") " pod="calico-system/calico-apiserver-5dfb68d68d-lvlpx" Apr 24 23:36:35.573414 kubelet[2603]: I0424 23:36:35.573372 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d26b79f-abbb-49bb-8a77-0b8884d8e07b-config-volume\") pod \"coredns-674b8bbfcf-pcfdd\" (UID: \"2d26b79f-abbb-49bb-8a77-0b8884d8e07b\") " pod="kube-system/coredns-674b8bbfcf-pcfdd" Apr 24 23:36:35.573474 kubelet[2603]: I0424 23:36:35.573445 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w4fr\" (UniqueName: \"kubernetes.io/projected/644bb617-7ecf-48f1-9ddf-e7a4ce31159a-kube-api-access-2w4fr\") pod \"coredns-674b8bbfcf-f4wxb\" (UID: \"644bb617-7ecf-48f1-9ddf-e7a4ce31159a\") " pod="kube-system/coredns-674b8bbfcf-f4wxb" Apr 24 23:36:35.573512 kubelet[2603]: I0424 23:36:35.573497 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-h97hm\" (UID: \"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d\") " pod="calico-system/goldmane-5b85766d88-h97hm" Apr 24 23:36:35.573567 kubelet[2603]: I0424 23:36:35.573542 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzvjr\" (UniqueName: \"kubernetes.io/projected/bdd18962-34fd-4ffb-80f3-162e643f9847-kube-api-access-wzvjr\") pod \"calico-apiserver-5dfb68d68d-klhv4\" (UID: \"bdd18962-34fd-4ffb-80f3-162e643f9847\") " pod="calico-system/calico-apiserver-5dfb68d68d-klhv4" Apr 24 23:36:35.573621 kubelet[2603]: I0424 23:36:35.573594 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: 
\"kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-nginx-config\") pod \"whisker-6d5557b47c-msgpn\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " pod="calico-system/whisker-6d5557b47c-msgpn" Apr 24 23:36:35.573674 kubelet[2603]: I0424 23:36:35.573649 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w677s\" (UniqueName: \"kubernetes.io/projected/9bdb0b4f-d6e3-4c87-82ef-c529db27e927-kube-api-access-w677s\") pod \"calico-kube-controllers-5dfdb6b6fc-lsx28\" (UID: \"9bdb0b4f-d6e3-4c87-82ef-c529db27e927\") " pod="calico-system/calico-kube-controllers-5dfdb6b6fc-lsx28" Apr 24 23:36:35.573712 kubelet[2603]: I0424 23:36:35.573698 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwf4t\" (UniqueName: \"kubernetes.io/projected/da8aa797-c626-438e-9db6-18259f8e1bda-kube-api-access-fwf4t\") pod \"calico-apiserver-5dfb68d68d-lvlpx\" (UID: \"da8aa797-c626-438e-9db6-18259f8e1bda\") " pod="calico-system/calico-apiserver-5dfb68d68d-lvlpx" Apr 24 23:36:35.573768 kubelet[2603]: I0424 23:36:35.573743 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d-config\") pod \"goldmane-5b85766d88-h97hm\" (UID: \"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d\") " pod="calico-system/goldmane-5b85766d88-h97hm" Apr 24 23:36:35.573818 kubelet[2603]: I0424 23:36:35.573792 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d-goldmane-key-pair\") pod \"goldmane-5b85766d88-h97hm\" (UID: \"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d\") " pod="calico-system/goldmane-5b85766d88-h97hm" Apr 24 23:36:35.573951 kubelet[2603]: I0424 23:36:35.573916 2603 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bdd18962-34fd-4ffb-80f3-162e643f9847-calico-apiserver-certs\") pod \"calico-apiserver-5dfb68d68d-klhv4\" (UID: \"bdd18962-34fd-4ffb-80f3-162e643f9847\") " pod="calico-system/calico-apiserver-5dfb68d68d-klhv4" Apr 24 23:36:35.574008 kubelet[2603]: I0424 23:36:35.573981 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-backend-key-pair\") pod \"whisker-6d5557b47c-msgpn\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " pod="calico-system/whisker-6d5557b47c-msgpn" Apr 24 23:36:35.574049 kubelet[2603]: I0424 23:36:35.574031 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-ca-bundle\") pod \"whisker-6d5557b47c-msgpn\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " pod="calico-system/whisker-6d5557b47c-msgpn" Apr 24 23:36:35.574119 kubelet[2603]: I0424 23:36:35.574092 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/644bb617-7ecf-48f1-9ddf-e7a4ce31159a-config-volume\") pod \"coredns-674b8bbfcf-f4wxb\" (UID: \"644bb617-7ecf-48f1-9ddf-e7a4ce31159a\") " pod="kube-system/coredns-674b8bbfcf-f4wxb" Apr 24 23:36:35.574188 kubelet[2603]: I0424 23:36:35.574144 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7k9d\" (UniqueName: \"kubernetes.io/projected/a4307834-33a8-455a-b7eb-35751e1353f1-kube-api-access-p7k9d\") pod \"whisker-6d5557b47c-msgpn\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " pod="calico-system/whisker-6d5557b47c-msgpn" Apr 24 
23:36:35.574225 kubelet[2603]: I0424 23:36:35.574201 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nszgp\" (UniqueName: \"kubernetes.io/projected/2d26b79f-abbb-49bb-8a77-0b8884d8e07b-kube-api-access-nszgp\") pod \"coredns-674b8bbfcf-pcfdd\" (UID: \"2d26b79f-abbb-49bb-8a77-0b8884d8e07b\") " pod="kube-system/coredns-674b8bbfcf-pcfdd" Apr 24 23:36:35.811283 containerd[1485]: time="2026-04-24T23:36:35.810502277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4wxb,Uid:644bb617-7ecf-48f1-9ddf-e7a4ce31159a,Namespace:kube-system,Attempt:0,}" Apr 24 23:36:35.811283 containerd[1485]: time="2026-04-24T23:36:35.810510757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcfdd,Uid:2d26b79f-abbb-49bb-8a77-0b8884d8e07b,Namespace:kube-system,Attempt:0,}" Apr 24 23:36:35.822086 containerd[1485]: time="2026-04-24T23:36:35.822034802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfdb6b6fc-lsx28,Uid:9bdb0b4f-d6e3-4c87-82ef-c529db27e927,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:35.841299 containerd[1485]: time="2026-04-24T23:36:35.841074745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-klhv4,Uid:bdd18962-34fd-4ffb-80f3-162e643f9847,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:35.842358 containerd[1485]: time="2026-04-24T23:36:35.842073542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-h97hm,Uid:1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:35.868202 containerd[1485]: time="2026-04-24T23:36:35.867718625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5557b47c-msgpn,Uid:a4307834-33a8-455a-b7eb-35751e1353f1,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:35.869524 containerd[1485]: time="2026-04-24T23:36:35.869480540Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-lvlpx,Uid:da8aa797-c626-438e-9db6-18259f8e1bda,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:36.032603 systemd[1]: Created slice kubepods-besteffort-pod2e6fb3e8_3a5f_4261_a733_c3a9f69c9b13.slice - libcontainer container kubepods-besteffort-pod2e6fb3e8_3a5f_4261_a733_c3a9f69c9b13.slice. Apr 24 23:36:36.039079 containerd[1485]: time="2026-04-24T23:36:36.039030278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56znh,Uid:2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:36.112061 containerd[1485]: time="2026-04-24T23:36:36.111947513Z" level=error msg="Failed to destroy network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.114703 containerd[1485]: time="2026-04-24T23:36:36.114600545Z" level=error msg="Failed to destroy network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.116649 containerd[1485]: time="2026-04-24T23:36:36.116598340Z" level=error msg="encountered an error cleaning up failed sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.116832 containerd[1485]: time="2026-04-24T23:36:36.116774539Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-f4wxb,Uid:644bb617-7ecf-48f1-9ddf-e7a4ce31159a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.117207 kubelet[2603]: E0424 23:36:36.117169 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.117284 kubelet[2603]: E0424 23:36:36.117236 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f4wxb" Apr 24 23:36:36.117284 kubelet[2603]: E0424 23:36:36.117256 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-f4wxb" Apr 24 23:36:36.117284 kubelet[2603]: E0424 23:36:36.117305 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-674b8bbfcf-f4wxb_kube-system(644bb617-7ecf-48f1-9ddf-e7a4ce31159a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-f4wxb_kube-system(644bb617-7ecf-48f1-9ddf-e7a4ce31159a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f4wxb" podUID="644bb617-7ecf-48f1-9ddf-e7a4ce31159a" Apr 24 23:36:36.118978 containerd[1485]: time="2026-04-24T23:36:36.118941173Z" level=error msg="Failed to destroy network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.119694 containerd[1485]: time="2026-04-24T23:36:36.119650851Z" level=error msg="encountered an error cleaning up failed sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.119892 containerd[1485]: time="2026-04-24T23:36:36.119868091Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfdb6b6fc-lsx28,Uid:9bdb0b4f-d6e3-4c87-82ef-c529db27e927,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.120422 kubelet[2603]: E0424 23:36:36.120362 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.120484 kubelet[2603]: E0424 23:36:36.120432 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfdb6b6fc-lsx28" Apr 24 23:36:36.120484 kubelet[2603]: E0424 23:36:36.120454 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfdb6b6fc-lsx28" Apr 24 23:36:36.120484 kubelet[2603]: E0424 23:36:36.120507 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfdb6b6fc-lsx28_calico-system(9bdb0b4f-d6e3-4c87-82ef-c529db27e927)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfdb6b6fc-lsx28_calico-system(9bdb0b4f-d6e3-4c87-82ef-c529db27e927)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfdb6b6fc-lsx28" podUID="9bdb0b4f-d6e3-4c87-82ef-c529db27e927" Apr 24 23:36:36.123307 containerd[1485]: time="2026-04-24T23:36:36.120678608Z" level=error msg="encountered an error cleaning up failed sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.123307 containerd[1485]: time="2026-04-24T23:36:36.122956762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-h97hm,Uid:1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.123496 kubelet[2603]: E0424 23:36:36.123154 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.123496 kubelet[2603]: E0424 23:36:36.123205 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-h97hm" Apr 24 23:36:36.123496 kubelet[2603]: E0424 23:36:36.123224 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-h97hm" Apr 24 23:36:36.123587 kubelet[2603]: E0424 23:36:36.123263 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-h97hm_calico-system(1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-h97hm_calico-system(1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-h97hm" podUID="1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d" Apr 24 23:36:36.128557 containerd[1485]: time="2026-04-24T23:36:36.128279947Z" level=error msg="Failed to destroy network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 24 23:36:36.130417 containerd[1485]: time="2026-04-24T23:36:36.130293341Z" level=error msg="encountered an error cleaning up failed sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.130618 containerd[1485]: time="2026-04-24T23:36:36.130577741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcfdd,Uid:2d26b79f-abbb-49bb-8a77-0b8884d8e07b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.131781 containerd[1485]: time="2026-04-24T23:36:36.131742777Z" level=error msg="Failed to destroy network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.132372 kubelet[2603]: E0424 23:36:36.132221 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.132372 kubelet[2603]: E0424 23:36:36.132272 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pcfdd" Apr 24 23:36:36.132372 kubelet[2603]: E0424 23:36:36.132298 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pcfdd" Apr 24 23:36:36.132493 containerd[1485]: time="2026-04-24T23:36:36.132273216Z" level=error msg="encountered an error cleaning up failed sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.132529 containerd[1485]: time="2026-04-24T23:36:36.132477535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-klhv4,Uid:bdd18962-34fd-4ffb-80f3-162e643f9847,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.132988 kubelet[2603]: E0424 23:36:36.132704 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-pcfdd_kube-system(2d26b79f-abbb-49bb-8a77-0b8884d8e07b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pcfdd_kube-system(2d26b79f-abbb-49bb-8a77-0b8884d8e07b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pcfdd" podUID="2d26b79f-abbb-49bb-8a77-0b8884d8e07b" Apr 24 23:36:36.132988 kubelet[2603]: E0424 23:36:36.132901 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.132988 kubelet[2603]: E0424 23:36:36.132931 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dfb68d68d-klhv4" Apr 24 23:36:36.133375 kubelet[2603]: E0424 23:36:36.132948 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dfb68d68d-klhv4" Apr 24 23:36:36.135395 kubelet[2603]: E0424 23:36:36.134580 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dfb68d68d-klhv4_calico-system(bdd18962-34fd-4ffb-80f3-162e643f9847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dfb68d68d-klhv4_calico-system(bdd18962-34fd-4ffb-80f3-162e643f9847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5dfb68d68d-klhv4" podUID="bdd18962-34fd-4ffb-80f3-162e643f9847" Apr 24 23:36:36.160364 containerd[1485]: time="2026-04-24T23:36:36.160295057Z" level=error msg="Failed to destroy network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.160652 containerd[1485]: time="2026-04-24T23:36:36.160622096Z" level=error msg="encountered an error cleaning up failed sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.160704 containerd[1485]: time="2026-04-24T23:36:36.160686776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5557b47c-msgpn,Uid:a4307834-33a8-455a-b7eb-35751e1353f1,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.160993 kubelet[2603]: E0424 23:36:36.160913 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.161117 kubelet[2603]: E0424 23:36:36.161100 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5557b47c-msgpn" Apr 24 23:36:36.161184 kubelet[2603]: E0424 23:36:36.161170 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5557b47c-msgpn" Apr 24 23:36:36.161318 kubelet[2603]: E0424 23:36:36.161291 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d5557b47c-msgpn_calico-system(a4307834-33a8-455a-b7eb-35751e1353f1)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"whisker-6d5557b47c-msgpn_calico-system(a4307834-33a8-455a-b7eb-35751e1353f1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d5557b47c-msgpn" podUID="a4307834-33a8-455a-b7eb-35751e1353f1" Apr 24 23:36:36.165038 containerd[1485]: time="2026-04-24T23:36:36.164925684Z" level=error msg="Failed to destroy network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.165589 containerd[1485]: time="2026-04-24T23:36:36.165325603Z" level=error msg="encountered an error cleaning up failed sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.165589 containerd[1485]: time="2026-04-24T23:36:36.165401523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-lvlpx,Uid:da8aa797-c626-438e-9db6-18259f8e1bda,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.166021 kubelet[2603]: E0424 23:36:36.165656 2603 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.166021 kubelet[2603]: E0424 23:36:36.165812 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dfb68d68d-lvlpx" Apr 24 23:36:36.166021 kubelet[2603]: E0424 23:36:36.165957 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5dfb68d68d-lvlpx" Apr 24 23:36:36.166144 kubelet[2603]: E0424 23:36:36.166123 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dfb68d68d-lvlpx_calico-system(da8aa797-c626-438e-9db6-18259f8e1bda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dfb68d68d-lvlpx_calico-system(da8aa797-c626-438e-9db6-18259f8e1bda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5dfb68d68d-lvlpx" podUID="da8aa797-c626-438e-9db6-18259f8e1bda" Apr 24 23:36:36.196313 containerd[1485]: time="2026-04-24T23:36:36.196267276Z" level=error msg="Failed to destroy network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.196646 containerd[1485]: time="2026-04-24T23:36:36.196620115Z" level=error msg="encountered an error cleaning up failed sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.196711 containerd[1485]: time="2026-04-24T23:36:36.196671434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56znh,Uid:2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.196972 kubelet[2603]: E0424 23:36:36.196919 2603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Apr 24 23:36:36.197022 kubelet[2603]: E0424 23:36:36.196978 2603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-56znh" Apr 24 23:36:36.197022 kubelet[2603]: E0424 23:36:36.196996 2603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-56znh" Apr 24 23:36:36.197072 kubelet[2603]: E0424 23:36:36.197042 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-56znh_calico-system(2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-56znh_calico-system(2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 23:36:36.209360 kubelet[2603]: I0424 23:36:36.209280 2603 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:36.211209 containerd[1485]: time="2026-04-24T23:36:36.210982754Z" level=info msg="StopPodSandbox for \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\"" Apr 24 23:36:36.211209 containerd[1485]: time="2026-04-24T23:36:36.211174154Z" level=info msg="Ensure that sandbox 4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09 in task-service has been cleanup successfully" Apr 24 23:36:36.213076 kubelet[2603]: I0424 23:36:36.212693 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:36.214473 containerd[1485]: time="2026-04-24T23:36:36.214423345Z" level=info msg="StopPodSandbox for \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\"" Apr 24 23:36:36.214644 containerd[1485]: time="2026-04-24T23:36:36.214586344Z" level=info msg="Ensure that sandbox c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5 in task-service has been cleanup successfully" Apr 24 23:36:36.218478 kubelet[2603]: I0424 23:36:36.218372 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:36:36.221234 containerd[1485]: time="2026-04-24T23:36:36.220902686Z" level=info msg="StopPodSandbox for \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\"" Apr 24 23:36:36.221234 containerd[1485]: time="2026-04-24T23:36:36.221080446Z" level=info msg="Ensure that sandbox 2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5 in task-service has been cleanup successfully" Apr 24 23:36:36.221779 kubelet[2603]: I0424 23:36:36.221761 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:36:36.224190 containerd[1485]: 
time="2026-04-24T23:36:36.224033757Z" level=info msg="StopPodSandbox for \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\"" Apr 24 23:36:36.226701 containerd[1485]: time="2026-04-24T23:36:36.226654070Z" level=info msg="Ensure that sandbox 94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd in task-service has been cleanup successfully" Apr 24 23:36:36.276324 kubelet[2603]: I0424 23:36:36.275111 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:36:36.276962 containerd[1485]: time="2026-04-24T23:36:36.276711489Z" level=info msg="StopPodSandbox for \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\"" Apr 24 23:36:36.278304 containerd[1485]: time="2026-04-24T23:36:36.277628327Z" level=info msg="Ensure that sandbox 8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4 in task-service has been cleanup successfully" Apr 24 23:36:36.294821 containerd[1485]: time="2026-04-24T23:36:36.294701279Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 23:36:36.295296 kubelet[2603]: I0424 23:36:36.295263 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:36:36.298228 containerd[1485]: time="2026-04-24T23:36:36.298163229Z" level=info msg="StopPodSandbox for \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\"" Apr 24 23:36:36.298376 containerd[1485]: time="2026-04-24T23:36:36.298351948Z" level=info msg="Ensure that sandbox b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b in task-service has been cleanup successfully" Apr 24 23:36:36.300833 kubelet[2603]: I0424 23:36:36.300795 2603 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:36.303439 containerd[1485]: time="2026-04-24T23:36:36.303397614Z" level=info msg="StopPodSandbox for \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\"" Apr 24 23:36:36.303562 containerd[1485]: time="2026-04-24T23:36:36.303542494Z" level=info msg="Ensure that sandbox d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806 in task-service has been cleanup successfully" Apr 24 23:36:36.330410 kubelet[2603]: I0424 23:36:36.330373 2603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:36.334484 containerd[1485]: time="2026-04-24T23:36:36.334244687Z" level=error msg="StopPodSandbox for \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\" failed" error="failed to destroy network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.334660 kubelet[2603]: E0424 23:36:36.334507 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:36:36.334660 kubelet[2603]: E0424 23:36:36.334560 2603 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5"} Apr 24 23:36:36.334660 
kubelet[2603]: E0424 23:36:36.334614 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d26b79f-abbb-49bb-8a77-0b8884d8e07b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.334660 kubelet[2603]: E0424 23:36:36.334636 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d26b79f-abbb-49bb-8a77-0b8884d8e07b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pcfdd" podUID="2d26b79f-abbb-49bb-8a77-0b8884d8e07b" Apr 24 23:36:36.335192 containerd[1485]: time="2026-04-24T23:36:36.335123285Z" level=info msg="StopPodSandbox for \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\"" Apr 24 23:36:36.335411 containerd[1485]: time="2026-04-24T23:36:36.335279724Z" level=info msg="Ensure that sandbox 91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609 in task-service has been cleanup successfully" Apr 24 23:36:36.338733 containerd[1485]: time="2026-04-24T23:36:36.338658795Z" level=error msg="StopPodSandbox for \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\" failed" error="failed to destroy network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.339361 kubelet[2603]: E0424 23:36:36.339043 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:36.339361 kubelet[2603]: E0424 23:36:36.339172 2603 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5"} Apr 24 23:36:36.339361 kubelet[2603]: E0424 23:36:36.339210 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bdd18962-34fd-4ffb-80f3-162e643f9847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.339361 kubelet[2603]: E0424 23:36:36.339232 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bdd18962-34fd-4ffb-80f3-162e643f9847\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-5dfb68d68d-klhv4" podUID="bdd18962-34fd-4ffb-80f3-162e643f9847" Apr 24 23:36:36.353719 containerd[1485]: time="2026-04-24T23:36:36.353469113Z" level=error msg="StopPodSandbox for \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\" failed" error="failed to destroy network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.354421 kubelet[2603]: E0424 23:36:36.354204 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:36.354421 kubelet[2603]: E0424 23:36:36.354259 2603 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09"} Apr 24 23:36:36.354421 kubelet[2603]: E0424 23:36:36.354301 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"644bb617-7ecf-48f1-9ddf-e7a4ce31159a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.354421 kubelet[2603]: E0424 23:36:36.354325 2603 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"644bb617-7ecf-48f1-9ddf-e7a4ce31159a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-f4wxb" podUID="644bb617-7ecf-48f1-9ddf-e7a4ce31159a" Apr 24 23:36:36.371366 containerd[1485]: time="2026-04-24T23:36:36.368968710Z" level=info msg="CreateContainer within sandbox \"025654f281157b556b633ca5a0df051a62d66af6f115c76cd13093c93f45da59\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"110dedf61eea6600c2be2db5f78c5a6f63a846a1e05534c10fc55b43a6f57cec\"" Apr 24 23:36:36.373235 containerd[1485]: time="2026-04-24T23:36:36.372961738Z" level=info msg="StartContainer for \"110dedf61eea6600c2be2db5f78c5a6f63a846a1e05534c10fc55b43a6f57cec\"" Apr 24 23:36:36.382154 containerd[1485]: time="2026-04-24T23:36:36.382092633Z" level=error msg="StopPodSandbox for \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\" failed" error="failed to destroy network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.382462 kubelet[2603]: E0424 23:36:36.382425 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:36:36.383378 kubelet[2603]: E0424 23:36:36.382567 2603 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd"} Apr 24 23:36:36.383378 kubelet[2603]: E0424 23:36:36.382606 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4307834-33a8-455a-b7eb-35751e1353f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.383378 kubelet[2603]: E0424 23:36:36.382631 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4307834-33a8-455a-b7eb-35751e1353f1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d5557b47c-msgpn" podUID="a4307834-33a8-455a-b7eb-35751e1353f1" Apr 24 23:36:36.408190 containerd[1485]: time="2026-04-24T23:36:36.407766600Z" level=error msg="StopPodSandbox for \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\" failed" error="failed to destroy network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 24 23:36:36.408500 kubelet[2603]: E0424 23:36:36.408408 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:36:36.408500 kubelet[2603]: E0424 23:36:36.408456 2603 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4"} Apr 24 23:36:36.408584 kubelet[2603]: E0424 23:36:36.408493 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.408584 kubelet[2603]: E0424 23:36:36.408542 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-56znh" podUID="2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13" Apr 24 
23:36:36.423387 containerd[1485]: time="2026-04-24T23:36:36.423339357Z" level=error msg="StopPodSandbox for \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\" failed" error="failed to destroy network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.425402 kubelet[2603]: E0424 23:36:36.425173 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:36.425402 kubelet[2603]: E0424 23:36:36.425256 2603 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609"} Apr 24 23:36:36.425402 kubelet[2603]: E0424 23:36:36.425295 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9bdb0b4f-d6e3-4c87-82ef-c529db27e927\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.425402 kubelet[2603]: E0424 23:36:36.425318 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9bdb0b4f-d6e3-4c87-82ef-c529db27e927\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfdb6b6fc-lsx28" podUID="9bdb0b4f-d6e3-4c87-82ef-c529db27e927" Apr 24 23:36:36.426125 containerd[1485]: time="2026-04-24T23:36:36.425934829Z" level=error msg="StopPodSandbox for \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\" failed" error="failed to destroy network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.426201 kubelet[2603]: E0424 23:36:36.426143 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:36.426201 kubelet[2603]: E0424 23:36:36.426178 2603 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806"} Apr 24 23:36:36.426257 kubelet[2603]: E0424 23:36:36.426207 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.426257 kubelet[2603]: E0424 23:36:36.426225 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-h97hm" podUID="1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d" Apr 24 23:36:36.427417 containerd[1485]: time="2026-04-24T23:36:36.426929746Z" level=error msg="StopPodSandbox for \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\" failed" error="failed to destroy network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 23:36:36.427500 kubelet[2603]: E0424 23:36:36.427273 2603 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:36:36.427500 kubelet[2603]: E0424 23:36:36.427310 2603 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b"} Apr 24 23:36:36.427500 kubelet[2603]: E0424 23:36:36.427363 2603 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da8aa797-c626-438e-9db6-18259f8e1bda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 24 23:36:36.427500 kubelet[2603]: E0424 23:36:36.427385 2603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da8aa797-c626-438e-9db6-18259f8e1bda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5dfb68d68d-lvlpx" podUID="da8aa797-c626-438e-9db6-18259f8e1bda" Apr 24 23:36:36.433508 systemd[1]: Started cri-containerd-110dedf61eea6600c2be2db5f78c5a6f63a846a1e05534c10fc55b43a6f57cec.scope - libcontainer container 110dedf61eea6600c2be2db5f78c5a6f63a846a1e05534c10fc55b43a6f57cec. 
Apr 24 23:36:36.465468 containerd[1485]: time="2026-04-24T23:36:36.465408558Z" level=info msg="StartContainer for \"110dedf61eea6600c2be2db5f78c5a6f63a846a1e05534c10fc55b43a6f57cec\" returns successfully" Apr 24 23:36:37.338485 containerd[1485]: time="2026-04-24T23:36:37.338421120Z" level=info msg="StopPodSandbox for \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\"" Apr 24 23:36:37.429051 kubelet[2603]: I0424 23:36:37.427544 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8xml7" podStartSLOduration=4.12313793 podStartE2EDuration="16.427520605s" podCreationTimestamp="2026-04-24 23:36:21 +0000 UTC" firstStartedPulling="2026-04-24 23:36:22.365789252 +0000 UTC m=+23.462236542" lastFinishedPulling="2026-04-24 23:36:34.670171927 +0000 UTC m=+35.766619217" observedRunningTime="2026-04-24 23:36:37.371571753 +0000 UTC m=+38.468019043" watchObservedRunningTime="2026-04-24 23:36:37.427520605 +0000 UTC m=+38.523967935" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.430 [INFO][3847] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.430 [INFO][3847] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" iface="eth0" netns="/var/run/netns/cni-08e7820c-14d4-9a7f-c9d4-52255e0480f6" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.431 [INFO][3847] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" iface="eth0" netns="/var/run/netns/cni-08e7820c-14d4-9a7f-c9d4-52255e0480f6" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.431 [INFO][3847] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" iface="eth0" netns="/var/run/netns/cni-08e7820c-14d4-9a7f-c9d4-52255e0480f6" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.431 [INFO][3847] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.431 [INFO][3847] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.476 [INFO][3873] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.476 [INFO][3873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.476 [INFO][3873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.488 [WARNING][3873] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.488 [INFO][3873] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.492 [INFO][3873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:37.499407 containerd[1485]: 2026-04-24 23:36:37.496 [INFO][3847] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:36:37.501598 containerd[1485]: time="2026-04-24T23:36:37.501554130Z" level=info msg="TearDown network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\" successfully" Apr 24 23:36:37.501598 containerd[1485]: time="2026-04-24T23:36:37.501595250Z" level=info msg="StopPodSandbox for \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\" returns successfully" Apr 24 23:36:37.504312 systemd[1]: run-netns-cni\x2d08e7820c\x2d14d4\x2d9a7f\x2dc9d4\x2d52255e0480f6.mount: Deactivated successfully. 
Apr 24 23:36:37.595090 kubelet[2603]: I0424 23:36:37.593756 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-ca-bundle\") pod \"a4307834-33a8-455a-b7eb-35751e1353f1\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " Apr 24 23:36:37.595090 kubelet[2603]: I0424 23:36:37.593819 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7k9d\" (UniqueName: \"kubernetes.io/projected/a4307834-33a8-455a-b7eb-35751e1353f1-kube-api-access-p7k9d\") pod \"a4307834-33a8-455a-b7eb-35751e1353f1\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " Apr 24 23:36:37.595090 kubelet[2603]: I0424 23:36:37.593920 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-backend-key-pair\") pod \"a4307834-33a8-455a-b7eb-35751e1353f1\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " Apr 24 23:36:37.595090 kubelet[2603]: I0424 23:36:37.593964 2603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-nginx-config\") pod \"a4307834-33a8-455a-b7eb-35751e1353f1\" (UID: \"a4307834-33a8-455a-b7eb-35751e1353f1\") " Apr 24 23:36:37.595090 kubelet[2603]: I0424 23:36:37.594625 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a4307834-33a8-455a-b7eb-35751e1353f1" (UID: "a4307834-33a8-455a-b7eb-35751e1353f1"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:36:37.598314 kubelet[2603]: I0424 23:36:37.597714 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a4307834-33a8-455a-b7eb-35751e1353f1" (UID: "a4307834-33a8-455a-b7eb-35751e1353f1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 23:36:37.601674 kubelet[2603]: I0424 23:36:37.601634 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4307834-33a8-455a-b7eb-35751e1353f1-kube-api-access-p7k9d" (OuterVolumeSpecName: "kube-api-access-p7k9d") pod "a4307834-33a8-455a-b7eb-35751e1353f1" (UID: "a4307834-33a8-455a-b7eb-35751e1353f1"). InnerVolumeSpecName "kube-api-access-p7k9d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 23:36:37.602478 kubelet[2603]: I0424 23:36:37.602442 2603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a4307834-33a8-455a-b7eb-35751e1353f1" (UID: "a4307834-33a8-455a-b7eb-35751e1353f1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 23:36:37.602647 systemd[1]: var-lib-kubelet-pods-a4307834\x2d33a8\x2d455a\x2db7eb\x2d35751e1353f1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 24 23:36:37.606188 systemd[1]: var-lib-kubelet-pods-a4307834\x2d33a8\x2d455a\x2db7eb\x2d35751e1353f1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7k9d.mount: Deactivated successfully. 
Apr 24 23:36:37.695325 kubelet[2603]: I0424 23:36:37.695209 2603 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-ca-bundle\") on node \"ci-4081-3-6-n-3eeab28b3a\" DevicePath \"\"" Apr 24 23:36:37.695325 kubelet[2603]: I0424 23:36:37.695275 2603 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7k9d\" (UniqueName: \"kubernetes.io/projected/a4307834-33a8-455a-b7eb-35751e1353f1-kube-api-access-p7k9d\") on node \"ci-4081-3-6-n-3eeab28b3a\" DevicePath \"\"" Apr 24 23:36:37.695325 kubelet[2603]: I0424 23:36:37.695312 2603 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4307834-33a8-455a-b7eb-35751e1353f1-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-3eeab28b3a\" DevicePath \"\"" Apr 24 23:36:37.695325 kubelet[2603]: I0424 23:36:37.695359 2603 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4307834-33a8-455a-b7eb-35751e1353f1-nginx-config\") on node \"ci-4081-3-6-n-3eeab28b3a\" DevicePath \"\"" Apr 24 23:36:37.799529 kubelet[2603]: I0424 23:36:37.798879 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:36:38.287376 kernel: calico-node[3976]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 24 23:36:38.357000 systemd[1]: Removed slice kubepods-besteffort-poda4307834_33a8_455a_b7eb_35751e1353f1.slice - libcontainer container kubepods-besteffort-poda4307834_33a8_455a_b7eb_35751e1353f1.slice. Apr 24 23:36:38.464213 systemd[1]: Created slice kubepods-besteffort-podb1d20a00_780c_4874_aebf_9eebc16fff1b.slice - libcontainer container kubepods-besteffort-podb1d20a00_780c_4874_aebf_9eebc16fff1b.slice. 
Apr 24 23:36:38.502977 kubelet[2603]: I0424 23:36:38.502935 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b1d20a00-780c-4874-aebf-9eebc16fff1b-whisker-backend-key-pair\") pod \"whisker-99c87c86b-8cbfh\" (UID: \"b1d20a00-780c-4874-aebf-9eebc16fff1b\") " pod="calico-system/whisker-99c87c86b-8cbfh" Apr 24 23:36:38.502977 kubelet[2603]: I0424 23:36:38.502985 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b1d20a00-780c-4874-aebf-9eebc16fff1b-nginx-config\") pod \"whisker-99c87c86b-8cbfh\" (UID: \"b1d20a00-780c-4874-aebf-9eebc16fff1b\") " pod="calico-system/whisker-99c87c86b-8cbfh" Apr 24 23:36:38.503553 kubelet[2603]: I0424 23:36:38.503004 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1d20a00-780c-4874-aebf-9eebc16fff1b-whisker-ca-bundle\") pod \"whisker-99c87c86b-8cbfh\" (UID: \"b1d20a00-780c-4874-aebf-9eebc16fff1b\") " pod="calico-system/whisker-99c87c86b-8cbfh" Apr 24 23:36:38.503553 kubelet[2603]: I0424 23:36:38.503077 2603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqxc8\" (UniqueName: \"kubernetes.io/projected/b1d20a00-780c-4874-aebf-9eebc16fff1b-kube-api-access-hqxc8\") pod \"whisker-99c87c86b-8cbfh\" (UID: \"b1d20a00-780c-4874-aebf-9eebc16fff1b\") " pod="calico-system/whisker-99c87c86b-8cbfh" Apr 24 23:36:38.768339 containerd[1485]: time="2026-04-24T23:36:38.768277314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-99c87c86b-8cbfh,Uid:b1d20a00-780c-4874-aebf-9eebc16fff1b,Namespace:calico-system,Attempt:0,}" Apr 24 23:36:38.821478 systemd-networkd[1387]: vxlan.calico: Link UP Apr 24 23:36:38.821485 systemd-networkd[1387]: vxlan.calico: 
Gained carrier Apr 24 23:36:38.980413 systemd-networkd[1387]: cali052a1828325: Link UP Apr 24 23:36:38.980622 systemd-networkd[1387]: cali052a1828325: Gained carrier Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.872 [INFO][4052] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0 whisker-99c87c86b- calico-system b1d20a00-780c-4874-aebf-9eebc16fff1b 903 0 2026-04-24 23:36:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:99c87c86b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a whisker-99c87c86b-8cbfh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali052a1828325 [] [] }} ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.873 [INFO][4052] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.910 [INFO][4086] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" HandleID="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.925 [INFO][4086] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" HandleID="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"whisker-99c87c86b-8cbfh", "timestamp":"2026-04-24 23:36:38.910294483 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004c1080)} Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.925 [INFO][4086] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.925 [INFO][4086] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.925 [INFO][4086] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.928 [INFO][4086] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.935 [INFO][4086] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.941 [INFO][4086] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.944 [INFO][4086] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.946 [INFO][4086] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.946 [INFO][4086] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.949 [INFO][4086] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.957 [INFO][4086] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.964 [INFO][4086] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.1/26] block=192.168.103.0/26 handle="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.964 [INFO][4086] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.1/26] handle="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.964 [INFO][4086] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:39.004479 containerd[1485]: 2026-04-24 23:36:38.964 [INFO][4086] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.1/26] IPv6=[] ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" HandleID="k8s-pod-network.cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" Apr 24 23:36:39.006945 containerd[1485]: 2026-04-24 23:36:38.968 [INFO][4052] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0", GenerateName:"whisker-99c87c86b-", Namespace:"calico-system", SelfLink:"", UID:"b1d20a00-780c-4874-aebf-9eebc16fff1b", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"99c87c86b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"whisker-99c87c86b-8cbfh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali052a1828325", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:39.006945 containerd[1485]: 2026-04-24 23:36:38.969 [INFO][4052] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.1/32] ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" Apr 24 23:36:39.006945 containerd[1485]: 2026-04-24 23:36:38.969 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali052a1828325 ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" Apr 24 23:36:39.006945 containerd[1485]: 2026-04-24 23:36:38.981 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" Apr 24 23:36:39.006945 containerd[1485]: 2026-04-24 23:36:38.983 [INFO][4052] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0", GenerateName:"whisker-99c87c86b-", Namespace:"calico-system", SelfLink:"", UID:"b1d20a00-780c-4874-aebf-9eebc16fff1b", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"99c87c86b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca", Pod:"whisker-99c87c86b-8cbfh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.103.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali052a1828325", MAC:"02:c6:4f:2c:8d:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:39.006945 containerd[1485]: 2026-04-24 23:36:39.000 [INFO][4052] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca" 
Namespace="calico-system" Pod="whisker-99c87c86b-8cbfh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--99c87c86b--8cbfh-eth0" Apr 24 23:36:39.027583 containerd[1485]: time="2026-04-24T23:36:39.025374402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:39.027583 containerd[1485]: time="2026-04-24T23:36:39.025425242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:39.027583 containerd[1485]: time="2026-04-24T23:36:39.025435522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:39.027583 containerd[1485]: time="2026-04-24T23:36:39.025505802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:39.029349 kubelet[2603]: I0424 23:36:39.029146 2603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4307834-33a8-455a-b7eb-35751e1353f1" path="/var/lib/kubelet/pods/a4307834-33a8-455a-b7eb-35751e1353f1/volumes" Apr 24 23:36:39.047517 systemd[1]: Started cri-containerd-cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca.scope - libcontainer container cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca. 
Apr 24 23:36:39.099137 containerd[1485]: time="2026-04-24T23:36:39.097743554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-99c87c86b-8cbfh,Uid:b1d20a00-780c-4874-aebf-9eebc16fff1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca\"" Apr 24 23:36:39.104693 containerd[1485]: time="2026-04-24T23:36:39.103344781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 24 23:36:40.036536 systemd-networkd[1387]: cali052a1828325: Gained IPv6LL Apr 24 23:36:40.486407 systemd-networkd[1387]: vxlan.calico: Gained IPv6LL Apr 24 23:36:40.598316 containerd[1485]: time="2026-04-24T23:36:40.598230921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:40.599929 containerd[1485]: time="2026-04-24T23:36:40.599786718Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:40.599929 containerd[1485]: time="2026-04-24T23:36:40.599899957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 24 23:36:40.602980 containerd[1485]: time="2026-04-24T23:36:40.602686751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:40.603615 containerd[1485]: time="2026-04-24T23:36:40.603577949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", 
size \"7280321\" in 1.500190248s" Apr 24 23:36:40.603681 containerd[1485]: time="2026-04-24T23:36:40.603614429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 24 23:36:40.609920 containerd[1485]: time="2026-04-24T23:36:40.609570496Z" level=info msg="CreateContainer within sandbox \"cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 23:36:40.627212 containerd[1485]: time="2026-04-24T23:36:40.627081898Z" level=info msg="CreateContainer within sandbox \"cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4a9ceb920faa54bd5ee6328560cbe207320466dff88744740dcecbfc77368207\"" Apr 24 23:36:40.629187 containerd[1485]: time="2026-04-24T23:36:40.628524135Z" level=info msg="StartContainer for \"4a9ceb920faa54bd5ee6328560cbe207320466dff88744740dcecbfc77368207\"" Apr 24 23:36:40.666041 systemd[1]: run-containerd-runc-k8s.io-4a9ceb920faa54bd5ee6328560cbe207320466dff88744740dcecbfc77368207-runc.KXloqh.mount: Deactivated successfully. Apr 24 23:36:40.676589 systemd[1]: Started cri-containerd-4a9ceb920faa54bd5ee6328560cbe207320466dff88744740dcecbfc77368207.scope - libcontainer container 4a9ceb920faa54bd5ee6328560cbe207320466dff88744740dcecbfc77368207. Apr 24 23:36:40.713969 containerd[1485]: time="2026-04-24T23:36:40.713911629Z" level=info msg="StartContainer for \"4a9ceb920faa54bd5ee6328560cbe207320466dff88744740dcecbfc77368207\" returns successfully" Apr 24 23:36:40.716498 containerd[1485]: time="2026-04-24T23:36:40.716448584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 24 23:36:42.369932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368042372.mount: Deactivated successfully. 
Apr 24 23:36:42.388049 containerd[1485]: time="2026-04-24T23:36:42.387969468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:42.389917 containerd[1485]: time="2026-04-24T23:36:42.389854384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 24 23:36:42.391113 containerd[1485]: time="2026-04-24T23:36:42.391077662Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:42.394366 containerd[1485]: time="2026-04-24T23:36:42.393788817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:42.395547 containerd[1485]: time="2026-04-24T23:36:42.395497733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.679011109s" Apr 24 23:36:42.395547 containerd[1485]: time="2026-04-24T23:36:42.395544973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 24 23:36:42.402427 containerd[1485]: time="2026-04-24T23:36:42.401553122Z" level=info msg="CreateContainer within sandbox \"cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 23:36:42.422678 
containerd[1485]: time="2026-04-24T23:36:42.422618881Z" level=info msg="CreateContainer within sandbox \"cfc704c337d1ddf93455274cdd1a2b7818c1bda409def8defc8b99a546f497ca\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"24ad0fdcd6602f4a792ae1922df1f185070edbda76d17215e6a543fa2c96e7b6\"" Apr 24 23:36:42.424971 containerd[1485]: time="2026-04-24T23:36:42.424691157Z" level=info msg="StartContainer for \"24ad0fdcd6602f4a792ae1922df1f185070edbda76d17215e6a543fa2c96e7b6\"" Apr 24 23:36:42.470591 systemd[1]: Started cri-containerd-24ad0fdcd6602f4a792ae1922df1f185070edbda76d17215e6a543fa2c96e7b6.scope - libcontainer container 24ad0fdcd6602f4a792ae1922df1f185070edbda76d17215e6a543fa2c96e7b6. Apr 24 23:36:42.508818 containerd[1485]: time="2026-04-24T23:36:42.508683477Z" level=info msg="StartContainer for \"24ad0fdcd6602f4a792ae1922df1f185070edbda76d17215e6a543fa2c96e7b6\" returns successfully" Apr 24 23:36:47.021407 containerd[1485]: time="2026-04-24T23:36:47.020672468Z" level=info msg="StopPodSandbox for \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\"" Apr 24 23:36:47.084544 kubelet[2603]: I0424 23:36:47.083186 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-99c87c86b-8cbfh" podStartSLOduration=5.788132715 podStartE2EDuration="9.083158581s" podCreationTimestamp="2026-04-24 23:36:38 +0000 UTC" firstStartedPulling="2026-04-24 23:36:39.102079744 +0000 UTC m=+40.198526994" lastFinishedPulling="2026-04-24 23:36:42.39710557 +0000 UTC m=+43.493552860" observedRunningTime="2026-04-24 23:36:43.380987376 +0000 UTC m=+44.477434666" watchObservedRunningTime="2026-04-24 23:36:47.083158581 +0000 UTC m=+48.179605871" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.085 [INFO][4319] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.085 
[INFO][4319] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" iface="eth0" netns="/var/run/netns/cni-e08b93ab-4365-c7a3-50a2-3df13813f682" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.085 [INFO][4319] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" iface="eth0" netns="/var/run/netns/cni-e08b93ab-4365-c7a3-50a2-3df13813f682" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.086 [INFO][4319] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" iface="eth0" netns="/var/run/netns/cni-e08b93ab-4365-c7a3-50a2-3df13813f682" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.086 [INFO][4319] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.086 [INFO][4319] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.112 [INFO][4326] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.112 [INFO][4326] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.112 [INFO][4326] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.125 [WARNING][4326] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.126 [INFO][4326] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.128 [INFO][4326] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:47.135397 containerd[1485]: 2026-04-24 23:36:47.132 [INFO][4319] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:47.136018 containerd[1485]: time="2026-04-24T23:36:47.135897148Z" level=info msg="TearDown network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\" successfully" Apr 24 23:36:47.136018 containerd[1485]: time="2026-04-24T23:36:47.135931308Z" level=info msg="StopPodSandbox for \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\" returns successfully" Apr 24 23:36:47.136944 containerd[1485]: time="2026-04-24T23:36:47.136848507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-h97hm,Uid:1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d,Namespace:calico-system,Attempt:1,}" Apr 24 23:36:47.138883 systemd[1]: run-netns-cni\x2de08b93ab\x2d4365\x2dc7a3\x2d50a2\x2d3df13813f682.mount: Deactivated successfully. 
Apr 24 23:36:47.332142 systemd-networkd[1387]: calia08fdbb1449: Link UP Apr 24 23:36:47.334462 systemd-networkd[1387]: calia08fdbb1449: Gained carrier Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.203 [INFO][4335] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0 goldmane-5b85766d88- calico-system 1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d 940 0 2026-04-24 23:36:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a goldmane-5b85766d88-h97hm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia08fdbb1449 [] [] }} ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.203 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.255 [INFO][4347] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" HandleID="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.273 [INFO][4347] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" HandleID="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"goldmane-5b85766d88-h97hm", "timestamp":"2026-04-24 23:36:47.255636302 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.273 [INFO][4347] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.273 [INFO][4347] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.273 [INFO][4347] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.276 [INFO][4347] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.284 [INFO][4347] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.300 [INFO][4347] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.303 [INFO][4347] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.306 [INFO][4347] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.306 [INFO][4347] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.312 [INFO][4347] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408 Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.318 [INFO][4347] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.325 [INFO][4347] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.2/26] block=192.168.103.0/26 handle="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.325 [INFO][4347] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.2/26] handle="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.325 [INFO][4347] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:47.355739 containerd[1485]: 2026-04-24 23:36:47.325 [INFO][4347] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.2/26] IPv6=[] ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" HandleID="k8s-pod-network.6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.356277 containerd[1485]: 2026-04-24 23:36:47.328 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"goldmane-5b85766d88-h97hm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia08fdbb1449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:47.356277 containerd[1485]: 2026-04-24 23:36:47.328 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.2/32] ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.356277 containerd[1485]: 2026-04-24 23:36:47.328 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia08fdbb1449 ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.356277 containerd[1485]: 2026-04-24 23:36:47.333 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.356277 containerd[1485]: 2026-04-24 23:36:47.334 [INFO][4335] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408", Pod:"goldmane-5b85766d88-h97hm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia08fdbb1449", MAC:"0e:d1:3b:be:76:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:47.356277 containerd[1485]: 2026-04-24 23:36:47.353 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408" Namespace="calico-system" Pod="goldmane-5b85766d88-h97hm" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:47.386006 containerd[1485]: time="2026-04-24T23:36:47.385614163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:47.386006 containerd[1485]: time="2026-04-24T23:36:47.385867242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:47.386006 containerd[1485]: time="2026-04-24T23:36:47.385886322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:47.386426 containerd[1485]: time="2026-04-24T23:36:47.385981682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:47.425564 systemd[1]: Started cri-containerd-6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408.scope - libcontainer container 6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408. 
Apr 24 23:36:47.490194 containerd[1485]: time="2026-04-24T23:36:47.490145418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-h97hm,Uid:1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d,Namespace:calico-system,Attempt:1,} returns sandbox id \"6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408\"" Apr 24 23:36:47.495151 containerd[1485]: time="2026-04-24T23:36:47.493732173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 24 23:36:48.018446 containerd[1485]: time="2026-04-24T23:36:48.017377650Z" level=info msg="StopPodSandbox for \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\"" Apr 24 23:36:48.019838 containerd[1485]: time="2026-04-24T23:36:48.019554367Z" level=info msg="StopPodSandbox for \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\"" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.096 [INFO][4436] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.096 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" iface="eth0" netns="/var/run/netns/cni-e4572b57-96b0-b860-8d0e-cbf3b7553fcb" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.096 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" iface="eth0" netns="/var/run/netns/cni-e4572b57-96b0-b860-8d0e-cbf3b7553fcb" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.097 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" iface="eth0" netns="/var/run/netns/cni-e4572b57-96b0-b860-8d0e-cbf3b7553fcb" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.097 [INFO][4436] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.097 [INFO][4436] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.124 [INFO][4448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.125 [INFO][4448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.125 [INFO][4448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.136 [WARNING][4448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.136 [INFO][4448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.138 [INFO][4448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:48.143583 containerd[1485]: 2026-04-24 23:36:48.141 [INFO][4436] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:48.147513 systemd[1]: run-netns-cni\x2de4572b57\x2d96b0\x2db860\x2d8d0e\x2dcbf3b7553fcb.mount: Deactivated successfully. 
Apr 24 23:36:48.148105 containerd[1485]: time="2026-04-24T23:36:48.148061920Z" level=info msg="TearDown network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\" successfully" Apr 24 23:36:48.148105 containerd[1485]: time="2026-04-24T23:36:48.148100280Z" level=info msg="StopPodSandbox for \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\" returns successfully" Apr 24 23:36:48.149608 containerd[1485]: time="2026-04-24T23:36:48.149559198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfdb6b6fc-lsx28,Uid:9bdb0b4f-d6e3-4c87-82ef-c529db27e927,Namespace:calico-system,Attempt:1,}" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.097 [INFO][4432] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.097 [INFO][4432] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" iface="eth0" netns="/var/run/netns/cni-df24bd29-44f7-e7e1-9d29-b2d71f0649c2" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.099 [INFO][4432] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" iface="eth0" netns="/var/run/netns/cni-df24bd29-44f7-e7e1-9d29-b2d71f0649c2" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.101 [INFO][4432] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" iface="eth0" netns="/var/run/netns/cni-df24bd29-44f7-e7e1-9d29-b2d71f0649c2" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.101 [INFO][4432] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.102 [INFO][4432] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.130 [INFO][4453] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.130 [INFO][4453] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.140 [INFO][4453] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.155 [WARNING][4453] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.155 [INFO][4453] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.158 [INFO][4453] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:48.176898 containerd[1485]: 2026-04-24 23:36:48.162 [INFO][4432] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:48.177893 containerd[1485]: time="2026-04-24T23:36:48.177546722Z" level=info msg="TearDown network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\" successfully" Apr 24 23:36:48.177893 containerd[1485]: time="2026-04-24T23:36:48.177576442Z" level=info msg="StopPodSandbox for \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\" returns successfully" Apr 24 23:36:48.182117 containerd[1485]: time="2026-04-24T23:36:48.180845278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4wxb,Uid:644bb617-7ecf-48f1-9ddf-e7a4ce31159a,Namespace:kube-system,Attempt:1,}" Apr 24 23:36:48.180995 systemd[1]: run-netns-cni\x2ddf24bd29\x2d44f7\x2de7e1\x2d9d29\x2db2d71f0649c2.mount: Deactivated successfully. 
Apr 24 23:36:48.344540 systemd-networkd[1387]: calif44c8ca205a: Link UP Apr 24 23:36:48.346976 systemd-networkd[1387]: calif44c8ca205a: Gained carrier Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.233 [INFO][4461] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0 calico-kube-controllers-5dfdb6b6fc- calico-system 9bdb0b4f-d6e3-4c87-82ef-c529db27e927 953 0 2026-04-24 23:36:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dfdb6b6fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a calico-kube-controllers-5dfdb6b6fc-lsx28 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif44c8ca205a [] [] }} ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.233 [INFO][4461] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.274 [INFO][4484] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" HandleID="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" 
Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.292 [INFO][4484] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" HandleID="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb300), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"calico-kube-controllers-5dfdb6b6fc-lsx28", "timestamp":"2026-04-24 23:36:48.274606436 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000580f20)} Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.292 [INFO][4484] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.292 [INFO][4484] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.292 [INFO][4484] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.296 [INFO][4484] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.308 [INFO][4484] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.315 [INFO][4484] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.317 [INFO][4484] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.319 [INFO][4484] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.319 [INFO][4484] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.321 [INFO][4484] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13 Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.328 [INFO][4484] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.335 [INFO][4484] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.3/26] block=192.168.103.0/26 handle="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.335 [INFO][4484] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.3/26] handle="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.335 [INFO][4484] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:48.367740 containerd[1485]: 2026-04-24 23:36:48.335 [INFO][4484] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.3/26] IPv6=[] ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" HandleID="k8s-pod-network.3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.369302 containerd[1485]: 2026-04-24 23:36:48.339 [INFO][4461] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0", GenerateName:"calico-kube-controllers-5dfdb6b6fc-", Namespace:"calico-system", SelfLink:"", UID:"9bdb0b4f-d6e3-4c87-82ef-c529db27e927", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfdb6b6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"calico-kube-controllers-5dfdb6b6fc-lsx28", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif44c8ca205a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:48.369302 containerd[1485]: 2026-04-24 23:36:48.339 [INFO][4461] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.3/32] ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.369302 containerd[1485]: 2026-04-24 23:36:48.339 [INFO][4461] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif44c8ca205a ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.369302 containerd[1485]: 2026-04-24 23:36:48.347 [INFO][4461] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.369302 containerd[1485]: 2026-04-24 23:36:48.348 [INFO][4461] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0", GenerateName:"calico-kube-controllers-5dfdb6b6fc-", Namespace:"calico-system", SelfLink:"", UID:"9bdb0b4f-d6e3-4c87-82ef-c529db27e927", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfdb6b6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13", Pod:"calico-kube-controllers-5dfdb6b6fc-lsx28", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif44c8ca205a", MAC:"de:98:6c:23:24:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:48.369302 containerd[1485]: 2026-04-24 23:36:48.365 [INFO][4461] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13" Namespace="calico-system" Pod="calico-kube-controllers-5dfdb6b6fc-lsx28" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:48.400388 containerd[1485]: time="2026-04-24T23:36:48.400265233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:48.401036 containerd[1485]: time="2026-04-24T23:36:48.400857112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:48.401036 containerd[1485]: time="2026-04-24T23:36:48.400883592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.401036 containerd[1485]: time="2026-04-24T23:36:48.400997432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.428407 systemd[1]: Started cri-containerd-3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13.scope - libcontainer container 3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13. 
Apr 24 23:36:48.461276 systemd-networkd[1387]: calibdc7cb8f8ff: Link UP Apr 24 23:36:48.462539 systemd-networkd[1387]: calibdc7cb8f8ff: Gained carrier Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.258 [INFO][4471] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0 coredns-674b8bbfcf- kube-system 644bb617-7ecf-48f1-9ddf-e7a4ce31159a 952 0 2026-04-24 23:36:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a coredns-674b8bbfcf-f4wxb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibdc7cb8f8ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.258 [INFO][4471] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.299 [INFO][4489] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" HandleID="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.311 [INFO][4489] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" HandleID="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"coredns-674b8bbfcf-f4wxb", "timestamp":"2026-04-24 23:36:48.299269644 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002a6000)} Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.311 [INFO][4489] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.335 [INFO][4489] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.336 [INFO][4489] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.397 [INFO][4489] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.408 [INFO][4489] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.421 [INFO][4489] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.428 [INFO][4489] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.434 [INFO][4489] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.434 [INFO][4489] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.437 [INFO][4489] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0 Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.442 [INFO][4489] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.453 [INFO][4489] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.4/26] block=192.168.103.0/26 handle="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.453 [INFO][4489] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.4/26] handle="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.453 [INFO][4489] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:48.492123 containerd[1485]: 2026-04-24 23:36:48.453 [INFO][4489] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.4/26] IPv6=[] ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" HandleID="k8s-pod-network.674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.492732 containerd[1485]: 2026-04-24 23:36:48.456 [INFO][4471] cni-plugin/k8s.go 418: Populated endpoint ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"644bb617-7ecf-48f1-9ddf-e7a4ce31159a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"coredns-674b8bbfcf-f4wxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdc7cb8f8ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:48.492732 containerd[1485]: 2026-04-24 23:36:48.457 [INFO][4471] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.4/32] ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.492732 containerd[1485]: 2026-04-24 23:36:48.457 [INFO][4471] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdc7cb8f8ff ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.492732 containerd[1485]: 2026-04-24 23:36:48.463 [INFO][4471] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.492732 containerd[1485]: 2026-04-24 23:36:48.464 [INFO][4471] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"644bb617-7ecf-48f1-9ddf-e7a4ce31159a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0", Pod:"coredns-674b8bbfcf-f4wxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdc7cb8f8ff", 
MAC:"9a:e7:75:74:e5:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:48.492732 containerd[1485]: 2026-04-24 23:36:48.482 [INFO][4471] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-f4wxb" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:48.526071 containerd[1485]: time="2026-04-24T23:36:48.525067831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfdb6b6fc-lsx28,Uid:9bdb0b4f-d6e3-4c87-82ef-c529db27e927,Namespace:calico-system,Attempt:1,} returns sandbox id \"3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13\"" Apr 24 23:36:48.538731 containerd[1485]: time="2026-04-24T23:36:48.538633373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:48.539043 containerd[1485]: time="2026-04-24T23:36:48.538907653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:48.539043 containerd[1485]: time="2026-04-24T23:36:48.538945573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.539257 containerd[1485]: time="2026-04-24T23:36:48.539171333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:48.557566 systemd[1]: Started cri-containerd-674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0.scope - libcontainer container 674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0. Apr 24 23:36:48.596954 containerd[1485]: time="2026-04-24T23:36:48.596776738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4wxb,Uid:644bb617-7ecf-48f1-9ddf-e7a4ce31159a,Namespace:kube-system,Attempt:1,} returns sandbox id \"674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0\"" Apr 24 23:36:48.605502 containerd[1485]: time="2026-04-24T23:36:48.605288327Z" level=info msg="CreateContainer within sandbox \"674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:36:48.647075 containerd[1485]: time="2026-04-24T23:36:48.647032393Z" level=info msg="CreateContainer within sandbox \"674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bf1924196784e4e12368a128153a7042b1aaba5740348224ab87181fec32f475\"" Apr 24 23:36:48.651684 containerd[1485]: time="2026-04-24T23:36:48.651613107Z" level=info msg="StartContainer for \"bf1924196784e4e12368a128153a7042b1aaba5740348224ab87181fec32f475\"" Apr 24 23:36:48.678577 systemd[1]: Started cri-containerd-bf1924196784e4e12368a128153a7042b1aaba5740348224ab87181fec32f475.scope - libcontainer container bf1924196784e4e12368a128153a7042b1aaba5740348224ab87181fec32f475. 
Apr 24 23:36:48.705886 containerd[1485]: time="2026-04-24T23:36:48.705824117Z" level=info msg="StartContainer for \"bf1924196784e4e12368a128153a7042b1aaba5740348224ab87181fec32f475\" returns successfully" Apr 24 23:36:49.021352 containerd[1485]: time="2026-04-24T23:36:49.018356673Z" level=info msg="StopPodSandbox for \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\"" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.101 [INFO][4666] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.101 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" iface="eth0" netns="/var/run/netns/cni-9d672ba2-a641-da5f-d06b-51f7217099de" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.102 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" iface="eth0" netns="/var/run/netns/cni-9d672ba2-a641-da5f-d06b-51f7217099de" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.104 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" iface="eth0" netns="/var/run/netns/cni-9d672ba2-a641-da5f-d06b-51f7217099de" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.104 [INFO][4666] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.104 [INFO][4666] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.167 [INFO][4678] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.167 [INFO][4678] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.168 [INFO][4678] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.183 [WARNING][4678] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.183 [INFO][4678] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.187 [INFO][4678] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:49.198930 containerd[1485]: 2026-04-24 23:36:49.193 [INFO][4666] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:36:49.203721 containerd[1485]: time="2026-04-24T23:36:49.203662727Z" level=info msg="TearDown network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\" successfully" Apr 24 23:36:49.203721 containerd[1485]: time="2026-04-24T23:36:49.203707367Z" level=info msg="StopPodSandbox for \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\" returns successfully" Apr 24 23:36:49.204602 containerd[1485]: time="2026-04-24T23:36:49.204570006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56znh,Uid:2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13,Namespace:calico-system,Attempt:1,}" Apr 24 23:36:49.206800 systemd[1]: run-netns-cni\x2d9d672ba2\x2da641\x2dda5f\x2dd06b\x2d51f7217099de.mount: Deactivated successfully. 
Apr 24 23:36:49.252931 systemd-networkd[1387]: calia08fdbb1449: Gained IPv6LL Apr 24 23:36:49.458713 kubelet[2603]: I0424 23:36:49.458434 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f4wxb" podStartSLOduration=44.457284019 podStartE2EDuration="44.457284019s" podCreationTimestamp="2026-04-24 23:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:49.422242541 +0000 UTC m=+50.518689791" watchObservedRunningTime="2026-04-24 23:36:49.457284019 +0000 UTC m=+50.553731269" Apr 24 23:36:49.507990 systemd-networkd[1387]: cali68b6235bfa4: Link UP Apr 24 23:36:49.508990 systemd-networkd[1387]: cali68b6235bfa4: Gained carrier Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.324 [INFO][4695] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0 csi-node-driver- calico-system 2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13 967 0 2026-04-24 23:36:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a csi-node-driver-56znh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali68b6235bfa4 [] [] }} ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.324 [INFO][4695] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.376 [INFO][4711] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" HandleID="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.396 [INFO][4711] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" HandleID="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"csi-node-driver-56znh", "timestamp":"2026-04-24 23:36:49.376082157 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010e2c0)} Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.396 [INFO][4711] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.396 [INFO][4711] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.396 [INFO][4711] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.401 [INFO][4711] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.423 [INFO][4711] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.441 [INFO][4711] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.445 [INFO][4711] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.453 [INFO][4711] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.454 [INFO][4711] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.462 [INFO][4711] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459 Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.476 [INFO][4711] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.493 [INFO][4711] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.5/26] block=192.168.103.0/26 handle="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.494 [INFO][4711] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.5/26] handle="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.494 [INFO][4711] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:49.548279 containerd[1485]: 2026-04-24 23:36:49.494 [INFO][4711] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.5/26] IPv6=[] ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" HandleID="k8s-pod-network.28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.549301 containerd[1485]: 2026-04-24 23:36:49.499 [INFO][4695] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"csi-node-driver-56znh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68b6235bfa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:49.549301 containerd[1485]: 2026-04-24 23:36:49.500 [INFO][4695] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.5/32] ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.549301 containerd[1485]: 2026-04-24 23:36:49.501 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68b6235bfa4 ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.549301 containerd[1485]: 2026-04-24 23:36:49.509 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.549301 
containerd[1485]: 2026-04-24 23:36:49.513 [INFO][4695] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459", Pod:"csi-node-driver-56znh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68b6235bfa4", MAC:"be:2a:18:48:d5:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:49.549301 containerd[1485]: 
2026-04-24 23:36:49.541 [INFO][4695] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459" Namespace="calico-system" Pod="csi-node-driver-56znh" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:36:49.611362 containerd[1485]: time="2026-04-24T23:36:49.610032913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:49.611362 containerd[1485]: time="2026-04-24T23:36:49.610108873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:49.611362 containerd[1485]: time="2026-04-24T23:36:49.610121113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:49.611362 containerd[1485]: time="2026-04-24T23:36:49.610227073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:49.650913 systemd[1]: Started cri-containerd-28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459.scope - libcontainer container 28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459. 
Apr 24 23:36:49.695227 containerd[1485]: time="2026-04-24T23:36:49.695184849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-56znh,Uid:2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13,Namespace:calico-system,Attempt:1,} returns sandbox id \"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459\"" Apr 24 23:36:50.018560 containerd[1485]: time="2026-04-24T23:36:50.018251578Z" level=info msg="StopPodSandbox for \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\"" Apr 24 23:36:50.020233 containerd[1485]: time="2026-04-24T23:36:50.020202815Z" level=info msg="StopPodSandbox for \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\"" Apr 24 23:36:50.021546 systemd-networkd[1387]: calif44c8ca205a: Gained IPv6LL Apr 24 23:36:50.148251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1091174069.mount: Deactivated successfully. Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.140 [INFO][4803] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.140 [INFO][4803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" iface="eth0" netns="/var/run/netns/cni-9259f52f-4881-f270-3aaa-f1b1cf72f3b8" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.142 [INFO][4803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" iface="eth0" netns="/var/run/netns/cni-9259f52f-4881-f270-3aaa-f1b1cf72f3b8" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.143 [INFO][4803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" iface="eth0" netns="/var/run/netns/cni-9259f52f-4881-f270-3aaa-f1b1cf72f3b8" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.144 [INFO][4803] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.144 [INFO][4803] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.203 [INFO][4822] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.203 [INFO][4822] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.203 [INFO][4822] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.226 [WARNING][4822] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.227 [INFO][4822] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.230 [INFO][4822] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:50.236727 containerd[1485]: 2026-04-24 23:36:50.232 [INFO][4803] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:36:50.239108 containerd[1485]: time="2026-04-24T23:36:50.238581046Z" level=info msg="TearDown network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\" successfully" Apr 24 23:36:50.239108 containerd[1485]: time="2026-04-24T23:36:50.238797366Z" level=info msg="StopPodSandbox for \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\" returns successfully" Apr 24 23:36:50.240516 containerd[1485]: time="2026-04-24T23:36:50.240411724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcfdd,Uid:2d26b79f-abbb-49bb-8a77-0b8884d8e07b,Namespace:kube-system,Attempt:1,}" Apr 24 23:36:50.242428 systemd[1]: run-netns-cni\x2d9259f52f\x2d4881\x2df270\x2d3aaa\x2df1b1cf72f3b8.mount: Deactivated successfully. 
Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.142 [INFO][4812] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.145 [INFO][4812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" iface="eth0" netns="/var/run/netns/cni-75a8ccb3-d0f0-f940-e647-df9a99c18d75" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.148 [INFO][4812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" iface="eth0" netns="/var/run/netns/cni-75a8ccb3-d0f0-f940-e647-df9a99c18d75" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.150 [INFO][4812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" iface="eth0" netns="/var/run/netns/cni-75a8ccb3-d0f0-f940-e647-df9a99c18d75" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.150 [INFO][4812] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.150 [INFO][4812] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.211 [INFO][4824] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.211 
[INFO][4824] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.229 [INFO][4824] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.249 [WARNING][4824] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.249 [INFO][4824] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.252 [INFO][4824] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:50.264830 containerd[1485]: 2026-04-24 23:36:50.257 [INFO][4812] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:50.266393 containerd[1485]: time="2026-04-24T23:36:50.265178856Z" level=info msg="TearDown network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\" successfully" Apr 24 23:36:50.266393 containerd[1485]: time="2026-04-24T23:36:50.265219336Z" level=info msg="StopPodSandbox for \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\" returns successfully" Apr 24 23:36:50.268279 containerd[1485]: time="2026-04-24T23:36:50.266962334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-klhv4,Uid:bdd18962-34fd-4ffb-80f3-162e643f9847,Namespace:calico-system,Attempt:1,}" Apr 24 23:36:50.276874 systemd[1]: run-netns-cni\x2d75a8ccb3\x2dd0f0\x2df940\x2de647\x2ddf9a99c18d75.mount: Deactivated successfully. Apr 24 23:36:50.319029 containerd[1485]: time="2026-04-24T23:36:50.318954795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:50.320561 containerd[1485]: time="2026-04-24T23:36:50.320500753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 24 23:36:50.323211 containerd[1485]: time="2026-04-24T23:36:50.323155630Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:50.330072 containerd[1485]: time="2026-04-24T23:36:50.329981102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:50.332342 containerd[1485]: time="2026-04-24T23:36:50.332179900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id 
\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.838365007s" Apr 24 23:36:50.333435 containerd[1485]: time="2026-04-24T23:36:50.333399778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 24 23:36:50.339074 containerd[1485]: time="2026-04-24T23:36:50.339015332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 24 23:36:50.341764 containerd[1485]: time="2026-04-24T23:36:50.341393489Z" level=info msg="CreateContainer within sandbox \"6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 23:36:50.376653 containerd[1485]: time="2026-04-24T23:36:50.376260529Z" level=info msg="CreateContainer within sandbox \"6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b961192389d69b6101091ed77fcc45f3c2a95f48dfc976f3d089f4e1ff93f746\"" Apr 24 23:36:50.378643 containerd[1485]: time="2026-04-24T23:36:50.378585127Z" level=info msg="StartContainer for \"b961192389d69b6101091ed77fcc45f3c2a95f48dfc976f3d089f4e1ff93f746\"" Apr 24 23:36:50.405020 systemd-networkd[1387]: calibdc7cb8f8ff: Gained IPv6LL Apr 24 23:36:50.450152 systemd[1]: Started cri-containerd-b961192389d69b6101091ed77fcc45f3c2a95f48dfc976f3d089f4e1ff93f746.scope - libcontainer container b961192389d69b6101091ed77fcc45f3c2a95f48dfc976f3d089f4e1ff93f746. 
Apr 24 23:36:50.504269 systemd-networkd[1387]: cali939d59d0d44: Link UP Apr 24 23:36:50.510759 systemd-networkd[1387]: cali939d59d0d44: Gained carrier Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.316 [INFO][4835] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0 coredns-674b8bbfcf- kube-system 2d26b79f-abbb-49bb-8a77-0b8884d8e07b 985 0 2026-04-24 23:36:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a coredns-674b8bbfcf-pcfdd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali939d59d0d44 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.316 [INFO][4835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.373 [INFO][4862] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" HandleID="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.395 [INFO][4862] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" HandleID="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"coredns-674b8bbfcf-pcfdd", "timestamp":"2026-04-24 23:36:50.373876732 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400014edc0)} Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.395 [INFO][4862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.395 [INFO][4862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.395 [INFO][4862] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.400 [INFO][4862] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.424 [INFO][4862] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.439 [INFO][4862] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.443 [INFO][4862] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.448 [INFO][4862] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.448 [INFO][4862] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.454 [INFO][4862] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.465 [INFO][4862] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.484 [INFO][4862] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.6/26] block=192.168.103.0/26 handle="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.484 [INFO][4862] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.6/26] handle="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.484 [INFO][4862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:50.544337 containerd[1485]: 2026-04-24 23:36:50.484 [INFO][4862] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.6/26] IPv6=[] ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" HandleID="k8s-pod-network.7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.545914 containerd[1485]: 2026-04-24 23:36:50.491 [INFO][4835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2d26b79f-abbb-49bb-8a77-0b8884d8e07b", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"coredns-674b8bbfcf-pcfdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali939d59d0d44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:50.545914 containerd[1485]: 2026-04-24 23:36:50.491 [INFO][4835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.6/32] ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.545914 containerd[1485]: 2026-04-24 23:36:50.491 [INFO][4835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali939d59d0d44 ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.545914 containerd[1485]: 2026-04-24 23:36:50.515 [INFO][4835] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.545914 containerd[1485]: 2026-04-24 23:36:50.521 [INFO][4835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2d26b79f-abbb-49bb-8a77-0b8884d8e07b", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e", Pod:"coredns-674b8bbfcf-pcfdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali939d59d0d44", 
MAC:"a2:6b:23:ff:69:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:50.545914 containerd[1485]: 2026-04-24 23:36:50.537 [INFO][4835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcfdd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:36:50.586890 containerd[1485]: time="2026-04-24T23:36:50.586734129Z" level=info msg="StartContainer for \"b961192389d69b6101091ed77fcc45f3c2a95f48dfc976f3d089f4e1ff93f746\" returns successfully" Apr 24 23:36:50.602414 containerd[1485]: time="2026-04-24T23:36:50.601494913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:50.602414 containerd[1485]: time="2026-04-24T23:36:50.601559913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:50.602414 containerd[1485]: time="2026-04-24T23:36:50.601575073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:50.602414 containerd[1485]: time="2026-04-24T23:36:50.601660672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:50.614489 systemd-networkd[1387]: cali8860d9777c6: Link UP Apr 24 23:36:50.619133 systemd-networkd[1387]: cali8860d9777c6: Gained carrier Apr 24 23:36:50.653608 systemd[1]: Started cri-containerd-7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e.scope - libcontainer container 7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e. Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.367 [INFO][4850] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0 calico-apiserver-5dfb68d68d- calico-system bdd18962-34fd-4ffb-80f3-162e643f9847 984 0 2026-04-24 23:36:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dfb68d68d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a calico-apiserver-5dfb68d68d-klhv4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8860d9777c6 [] [] }} ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.368 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.471 [INFO][4872] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" HandleID="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.495 [INFO][4872] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" HandleID="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f8b20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"calico-apiserver-5dfb68d68d-klhv4", "timestamp":"2026-04-24 23:36:50.471585941 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002b0420)} Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.496 [INFO][4872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.496 [INFO][4872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.496 [INFO][4872] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.507 [INFO][4872] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.529 [INFO][4872] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.546 [INFO][4872] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.553 [INFO][4872] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.558 [INFO][4872] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.558 [INFO][4872] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.564 [INFO][4872] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675 Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.578 [INFO][4872] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.598 [INFO][4872] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.7/26] block=192.168.103.0/26 handle="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.598 [INFO][4872] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.7/26] handle="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.598 [INFO][4872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:50.655136 containerd[1485]: 2026-04-24 23:36:50.598 [INFO][4872] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.7/26] IPv6=[] ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" HandleID="k8s-pod-network.7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.655762 containerd[1485]: 2026-04-24 23:36:50.603 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"bdd18962-34fd-4ffb-80f3-162e643f9847", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"calico-apiserver-5dfb68d68d-klhv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8860d9777c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:50.655762 containerd[1485]: 2026-04-24 23:36:50.603 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.7/32] ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.655762 containerd[1485]: 2026-04-24 23:36:50.603 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8860d9777c6 ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.655762 containerd[1485]: 2026-04-24 23:36:50.621 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" 
WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.655762 containerd[1485]: 2026-04-24 23:36:50.623 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"bdd18962-34fd-4ffb-80f3-162e643f9847", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675", Pod:"calico-apiserver-5dfb68d68d-klhv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8860d9777c6", MAC:"ce:c7:40:ad:34:ee", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:50.655762 containerd[1485]: 2026-04-24 23:36:50.649 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-klhv4" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:50.708095 containerd[1485]: time="2026-04-24T23:36:50.707980431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcfdd,Uid:2d26b79f-abbb-49bb-8a77-0b8884d8e07b,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e\"" Apr 24 23:36:50.719616 containerd[1485]: time="2026-04-24T23:36:50.719576098Z" level=info msg="CreateContainer within sandbox \"7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 23:36:50.732532 containerd[1485]: time="2026-04-24T23:36:50.732254404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:50.732532 containerd[1485]: time="2026-04-24T23:36:50.732345243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:50.732532 containerd[1485]: time="2026-04-24T23:36:50.732362643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:50.732532 containerd[1485]: time="2026-04-24T23:36:50.732459723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:50.778537 containerd[1485]: time="2026-04-24T23:36:50.778494191Z" level=info msg="CreateContainer within sandbox \"7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f861fcfdae7d6cb0a934c788b26304548c250a6a3b8d132da827ec83e39d6d24\"" Apr 24 23:36:50.779578 systemd[1]: Started cri-containerd-7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675.scope - libcontainer container 7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675. Apr 24 23:36:50.782283 containerd[1485]: time="2026-04-24T23:36:50.780965148Z" level=info msg="StartContainer for \"f861fcfdae7d6cb0a934c788b26304548c250a6a3b8d132da827ec83e39d6d24\"" Apr 24 23:36:50.823650 systemd[1]: Started cri-containerd-f861fcfdae7d6cb0a934c788b26304548c250a6a3b8d132da827ec83e39d6d24.scope - libcontainer container f861fcfdae7d6cb0a934c788b26304548c250a6a3b8d132da827ec83e39d6d24. 
Apr 24 23:36:50.873927 containerd[1485]: time="2026-04-24T23:36:50.873723882Z" level=info msg="StartContainer for \"f861fcfdae7d6cb0a934c788b26304548c250a6a3b8d132da827ec83e39d6d24\" returns successfully" Apr 24 23:36:50.891721 containerd[1485]: time="2026-04-24T23:36:50.891588422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-klhv4,Uid:bdd18962-34fd-4ffb-80f3-162e643f9847,Namespace:calico-system,Attempt:1,} returns sandbox id \"7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675\"" Apr 24 23:36:51.461071 kubelet[2603]: I0424 23:36:51.460298 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-h97hm" podStartSLOduration=29.618581043 podStartE2EDuration="32.460282486s" podCreationTimestamp="2026-04-24 23:36:19 +0000 UTC" firstStartedPulling="2026-04-24 23:36:47.492990654 +0000 UTC m=+48.589437944" lastFinishedPulling="2026-04-24 23:36:50.334692097 +0000 UTC m=+51.431139387" observedRunningTime="2026-04-24 23:36:51.459957407 +0000 UTC m=+52.556404777" watchObservedRunningTime="2026-04-24 23:36:51.460282486 +0000 UTC m=+52.556729736" Apr 24 23:36:51.556532 systemd-networkd[1387]: cali68b6235bfa4: Gained IPv6LL Apr 24 23:36:52.005110 systemd-networkd[1387]: cali939d59d0d44: Gained IPv6LL Apr 24 23:36:52.021617 containerd[1485]: time="2026-04-24T23:36:52.021579008Z" level=info msg="StopPodSandbox for \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\"" Apr 24 23:36:52.107468 kubelet[2603]: I0424 23:36:52.107370 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pcfdd" podStartSLOduration=47.107309122 podStartE2EDuration="47.107309122s" podCreationTimestamp="2026-04-24 23:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 23:36:51.48501422 +0000 UTC m=+52.581461510" 
watchObservedRunningTime="2026-04-24 23:36:52.107309122 +0000 UTC m=+53.203756452" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.106 [INFO][5112] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.107 [INFO][5112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" iface="eth0" netns="/var/run/netns/cni-254ab5be-4436-5f4b-47d5-8b095fd78a07" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.108 [INFO][5112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" iface="eth0" netns="/var/run/netns/cni-254ab5be-4436-5f4b-47d5-8b095fd78a07" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.108 [INFO][5112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" iface="eth0" netns="/var/run/netns/cni-254ab5be-4436-5f4b-47d5-8b095fd78a07" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.108 [INFO][5112] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.108 [INFO][5112] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.132 [INFO][5120] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.132 [INFO][5120] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.132 [INFO][5120] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.146 [WARNING][5120] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.146 [INFO][5120] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.149 [INFO][5120] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:52.153692 containerd[1485]: 2026-04-24 23:36:52.151 [INFO][5112] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:36:52.158067 containerd[1485]: time="2026-04-24T23:36:52.155497353Z" level=info msg="TearDown network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\" successfully" Apr 24 23:36:52.158067 containerd[1485]: time="2026-04-24T23:36:52.155531033Z" level=info msg="StopPodSandbox for \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\" returns successfully" Apr 24 23:36:52.158067 containerd[1485]: time="2026-04-24T23:36:52.157853791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-lvlpx,Uid:da8aa797-c626-438e-9db6-18259f8e1bda,Namespace:calico-system,Attempt:1,}" Apr 24 23:36:52.159249 systemd[1]: run-netns-cni\x2d254ab5be\x2d4436\x2d5f4b\x2d47d5\x2d8b095fd78a07.mount: Deactivated successfully. 
Apr 24 23:36:52.197084 systemd-networkd[1387]: cali8860d9777c6: Gained IPv6LL Apr 24 23:36:52.350117 systemd-networkd[1387]: cali43f29a15cfa: Link UP Apr 24 23:36:52.350960 systemd-networkd[1387]: cali43f29a15cfa: Gained carrier Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.236 [INFO][5126] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0 calico-apiserver-5dfb68d68d- calico-system da8aa797-c626-438e-9db6-18259f8e1bda 1013 0 2026-04-24 23:36:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dfb68d68d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-3eeab28b3a calico-apiserver-5dfb68d68d-lvlpx eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali43f29a15cfa [] [] }} ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.236 [INFO][5126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.270 [INFO][5144] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" HandleID="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" 
Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.282 [INFO][5144] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" HandleID="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-3eeab28b3a", "pod":"calico-apiserver-5dfb68d68d-lvlpx", "timestamp":"2026-04-24 23:36:52.270724558 +0000 UTC"}, Hostname:"ci-4081-3-6-n-3eeab28b3a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004db080)} Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.282 [INFO][5144] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.283 [INFO][5144] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.283 [INFO][5144] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-3eeab28b3a' Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.287 [INFO][5144] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.294 [INFO][5144] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.302 [INFO][5144] ipam/ipam.go 526: Trying affinity for 192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.305 [INFO][5144] ipam/ipam.go 160: Attempting to load block cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.311 [INFO][5144] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.103.0/26 host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.312 [INFO][5144] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.103.0/26 handle="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.318 [INFO][5144] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.331 [INFO][5144] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.103.0/26 handle="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.342 [INFO][5144] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.103.8/26] block=192.168.103.0/26 handle="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.342 [INFO][5144] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.103.8/26] handle="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" host="ci-4081-3-6-n-3eeab28b3a" Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.342 [INFO][5144] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:52.368764 containerd[1485]: 2026-04-24 23:36:52.343 [INFO][5144] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.103.8/26] IPv6=[] ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" HandleID="k8s-pod-network.09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.370795 containerd[1485]: 2026-04-24 23:36:52.346 [INFO][5126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"da8aa797-c626-438e-9db6-18259f8e1bda", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"", Pod:"calico-apiserver-5dfb68d68d-lvlpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f29a15cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:52.370795 containerd[1485]: 2026-04-24 23:36:52.346 [INFO][5126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.8/32] ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.370795 containerd[1485]: 2026-04-24 23:36:52.346 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43f29a15cfa ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.370795 containerd[1485]: 2026-04-24 23:36:52.350 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" 
WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.370795 containerd[1485]: 2026-04-24 23:36:52.352 [INFO][5126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"da8aa797-c626-438e-9db6-18259f8e1bda", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c", Pod:"calico-apiserver-5dfb68d68d-lvlpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f29a15cfa", MAC:"8a:ac:3a:58:52:31", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:52.370795 containerd[1485]: 2026-04-24 23:36:52.365 [INFO][5126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c" Namespace="calico-system" Pod="calico-apiserver-5dfb68d68d-lvlpx" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:36:52.392494 containerd[1485]: time="2026-04-24T23:36:52.392038316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 24 23:36:52.392494 containerd[1485]: time="2026-04-24T23:36:52.392110116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 24 23:36:52.392494 containerd[1485]: time="2026-04-24T23:36:52.392132996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:52.393035 containerd[1485]: time="2026-04-24T23:36:52.392885875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 24 23:36:52.423604 systemd[1]: Started cri-containerd-09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c.scope - libcontainer container 09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c. 
Apr 24 23:36:52.492640 containerd[1485]: time="2026-04-24T23:36:52.492523496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dfb68d68d-lvlpx,Uid:da8aa797-c626-438e-9db6-18259f8e1bda,Namespace:calico-system,Attempt:1,} returns sandbox id \"09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c\"" Apr 24 23:36:53.813426 containerd[1485]: time="2026-04-24T23:36:53.812468024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:53.814040 containerd[1485]: time="2026-04-24T23:36:53.814006662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 24 23:36:53.815366 containerd[1485]: time="2026-04-24T23:36:53.815256261Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:53.818949 containerd[1485]: time="2026-04-24T23:36:53.818765738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:53.820073 containerd[1485]: time="2026-04-24T23:36:53.820029936Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.480953084s" Apr 24 23:36:53.820132 containerd[1485]: time="2026-04-24T23:36:53.820072976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 24 23:36:53.822363 containerd[1485]: time="2026-04-24T23:36:53.822168814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 24 23:36:53.840590 containerd[1485]: time="2026-04-24T23:36:53.840436157Z" level=info msg="CreateContainer within sandbox \"3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 23:36:53.860843 containerd[1485]: time="2026-04-24T23:36:53.860794578Z" level=info msg="CreateContainer within sandbox \"3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dcdd5594b895bcb09a4cf3741cee0cb7edaa27e34506f58803d8a03a078092c1\"" Apr 24 23:36:53.864224 containerd[1485]: time="2026-04-24T23:36:53.863575456Z" level=info msg="StartContainer for \"dcdd5594b895bcb09a4cf3741cee0cb7edaa27e34506f58803d8a03a078092c1\"" Apr 24 23:36:53.915806 systemd[1]: Started cri-containerd-dcdd5594b895bcb09a4cf3741cee0cb7edaa27e34506f58803d8a03a078092c1.scope - libcontainer container dcdd5594b895bcb09a4cf3741cee0cb7edaa27e34506f58803d8a03a078092c1. 
Apr 24 23:36:53.953570 containerd[1485]: time="2026-04-24T23:36:53.953450731Z" level=info msg="StartContainer for \"dcdd5594b895bcb09a4cf3741cee0cb7edaa27e34506f58803d8a03a078092c1\" returns successfully" Apr 24 23:36:54.372685 systemd-networkd[1387]: cali43f29a15cfa: Gained IPv6LL Apr 24 23:36:54.512572 kubelet[2603]: I0424 23:36:54.512509 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5dfdb6b6fc-lsx28" podStartSLOduration=27.219735487 podStartE2EDuration="32.512490756s" podCreationTimestamp="2026-04-24 23:36:22 +0000 UTC" firstStartedPulling="2026-04-24 23:36:48.528811626 +0000 UTC m=+49.625258916" lastFinishedPulling="2026-04-24 23:36:53.821566895 +0000 UTC m=+54.918014185" observedRunningTime="2026-04-24 23:36:54.477929706 +0000 UTC m=+55.574376996" watchObservedRunningTime="2026-04-24 23:36:54.512490756 +0000 UTC m=+55.608938046" Apr 24 23:36:55.322699 containerd[1485]: time="2026-04-24T23:36:55.321867541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:55.324109 containerd[1485]: time="2026-04-24T23:36:55.324076699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 24 23:36:55.325220 containerd[1485]: time="2026-04-24T23:36:55.325187298Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:55.328624 containerd[1485]: time="2026-04-24T23:36:55.328472775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:36:55.329165 containerd[1485]: time="2026-04-24T23:36:55.329053375Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.506847761s" Apr 24 23:36:55.329165 containerd[1485]: time="2026-04-24T23:36:55.329085775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 24 23:36:55.331562 containerd[1485]: time="2026-04-24T23:36:55.331542573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:36:55.336495 containerd[1485]: time="2026-04-24T23:36:55.336452849Z" level=info msg="CreateContainer within sandbox \"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 23:36:55.356197 containerd[1485]: time="2026-04-24T23:36:55.356142432Z" level=info msg="CreateContainer within sandbox \"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"93e3c015baf1505ff817322640625421deed3dd931b5e1b31274ab7ea62904a6\"" Apr 24 23:36:55.360378 containerd[1485]: time="2026-04-24T23:36:55.358688030Z" level=info msg="StartContainer for \"93e3c015baf1505ff817322640625421deed3dd931b5e1b31274ab7ea62904a6\"" Apr 24 23:36:55.416599 systemd[1]: Started cri-containerd-93e3c015baf1505ff817322640625421deed3dd931b5e1b31274ab7ea62904a6.scope - libcontainer container 93e3c015baf1505ff817322640625421deed3dd931b5e1b31274ab7ea62904a6. 
Apr 24 23:36:55.452668 containerd[1485]: time="2026-04-24T23:36:55.452620513Z" level=info msg="StartContainer for \"93e3c015baf1505ff817322640625421deed3dd931b5e1b31274ab7ea62904a6\" returns successfully" Apr 24 23:36:59.035075 containerd[1485]: time="2026-04-24T23:36:59.035018955Z" level=info msg="StopPodSandbox for \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\"" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.078 [WARNING][5364] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"644bb617-7ecf-48f1-9ddf-e7a4ce31159a", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0", Pod:"coredns-674b8bbfcf-f4wxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdc7cb8f8ff", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.079 [INFO][5364] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.079 [INFO][5364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" iface="eth0" netns="" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.079 [INFO][5364] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.079 [INFO][5364] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.116 [INFO][5371] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.116 [INFO][5371] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.116 [INFO][5371] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.128 [WARNING][5371] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.128 [INFO][5371] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.130 [INFO][5371] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.136144 containerd[1485]: 2026-04-24 23:36:59.133 [INFO][5364] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.137830 containerd[1485]: time="2026-04-24T23:36:59.136198709Z" level=info msg="TearDown network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\" successfully" Apr 24 23:36:59.137830 containerd[1485]: time="2026-04-24T23:36:59.136252512Z" level=info msg="StopPodSandbox for \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\" returns successfully" Apr 24 23:36:59.140002 containerd[1485]: time="2026-04-24T23:36:59.139928095Z" level=info msg="RemovePodSandbox for \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\"" Apr 24 23:36:59.144635 containerd[1485]: time="2026-04-24T23:36:59.144596087Z" level=info msg="Forcibly stopping sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\"" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.192 [WARNING][5385] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"644bb617-7ecf-48f1-9ddf-e7a4ce31159a", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"674420a126f54ebc17c085aff5feff1ceb5f5e7d7b69829e3ba759d6ceb955b0", Pod:"coredns-674b8bbfcf-f4wxb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdc7cb8f8ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 
23:36:59.192 [INFO][5385] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.192 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" iface="eth0" netns="" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.193 [INFO][5385] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.193 [INFO][5385] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.225 [INFO][5400] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.225 [INFO][5400] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.225 [INFO][5400] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.239 [WARNING][5400] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.240 [INFO][5400] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" HandleID="k8s-pod-network.4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--f4wxb-eth0" Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.242 [INFO][5400] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.245955 containerd[1485]: 2026-04-24 23:36:59.244 [INFO][5385] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09" Apr 24 23:36:59.246972 containerd[1485]: time="2026-04-24T23:36:59.246069256Z" level=info msg="TearDown network for sandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\" successfully" Apr 24 23:36:59.263032 containerd[1485]: time="2026-04-24T23:36:59.262748486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:36:59.263032 containerd[1485]: time="2026-04-24T23:36:59.262876452Z" level=info msg="RemovePodSandbox \"4d9291a66b5fd8d97e2c0b5bfba5db46a37b3ad143fe2be01d2bf60525cf7a09\" returns successfully" Apr 24 23:36:59.263988 containerd[1485]: time="2026-04-24T23:36:59.263937065Z" level=info msg="StopPodSandbox for \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\"" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.311 [WARNING][5417] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"bdd18962-34fd-4ffb-80f3-162e643f9847", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675", Pod:"calico-apiserver-5dfb68d68d-klhv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8860d9777c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.311 [INFO][5417] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.312 [INFO][5417] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" iface="eth0" netns="" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.312 [INFO][5417] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.312 [INFO][5417] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.351 [INFO][5424] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.351 [INFO][5424] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.351 [INFO][5424] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.364 [WARNING][5424] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.364 [INFO][5424] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.366 [INFO][5424] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.378043 containerd[1485]: 2026-04-24 23:36:59.373 [INFO][5417] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.380040 containerd[1485]: time="2026-04-24T23:36:59.377961298Z" level=info msg="TearDown network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\" successfully" Apr 24 23:36:59.380040 containerd[1485]: time="2026-04-24T23:36:59.379368568Z" level=info msg="StopPodSandbox for \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\" returns successfully" Apr 24 23:36:59.380406 containerd[1485]: time="2026-04-24T23:36:59.380322495Z" level=info msg="RemovePodSandbox for \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\"" Apr 24 23:36:59.380508 containerd[1485]: time="2026-04-24T23:36:59.380492744Z" level=info msg="Forcibly stopping sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\"" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.434 [WARNING][5440] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"bdd18962-34fd-4ffb-80f3-162e643f9847", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675", Pod:"calico-apiserver-5dfb68d68d-klhv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8860d9777c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.434 [INFO][5440] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.434 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" iface="eth0" netns="" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.434 [INFO][5440] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.434 [INFO][5440] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.465 [INFO][5451] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.465 [INFO][5451] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.465 [INFO][5451] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.484 [WARNING][5451] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.484 [INFO][5451] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" HandleID="k8s-pod-network.c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--klhv4-eth0" Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.488 [INFO][5451] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.495774 containerd[1485]: 2026-04-24 23:36:59.491 [INFO][5440] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5" Apr 24 23:36:59.497872 containerd[1485]: time="2026-04-24T23:36:59.495970809Z" level=info msg="TearDown network for sandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\" successfully" Apr 24 23:36:59.501619 containerd[1485]: time="2026-04-24T23:36:59.501457042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:36:59.501902 containerd[1485]: time="2026-04-24T23:36:59.501859622Z" level=info msg="RemovePodSandbox \"c210f1133be654bd01a2851af1289d8d4a740038db5cb26740df3a53dcf478d5\" returns successfully" Apr 24 23:36:59.502761 containerd[1485]: time="2026-04-24T23:36:59.502734385Z" level=info msg="StopPodSandbox for \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\"" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.558 [WARNING][5466] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0", GenerateName:"calico-kube-controllers-5dfdb6b6fc-", Namespace:"calico-system", SelfLink:"", UID:"9bdb0b4f-d6e3-4c87-82ef-c529db27e927", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfdb6b6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13", Pod:"calico-kube-controllers-5dfdb6b6fc-lsx28", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif44c8ca205a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.558 [INFO][5466] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.558 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" iface="eth0" netns="" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.558 [INFO][5466] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.558 [INFO][5466] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.586 [INFO][5474] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.586 [INFO][5474] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.586 [INFO][5474] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.597 [WARNING][5474] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.597 [INFO][5474] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.600 [INFO][5474] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.609253 containerd[1485]: 2026-04-24 23:36:59.605 [INFO][5466] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.609928 containerd[1485]: time="2026-04-24T23:36:59.609892277Z" level=info msg="TearDown network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\" successfully" Apr 24 23:36:59.610021 containerd[1485]: time="2026-04-24T23:36:59.610007283Z" level=info msg="StopPodSandbox for \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\" returns successfully" Apr 24 23:36:59.610921 containerd[1485]: time="2026-04-24T23:36:59.610754600Z" level=info msg="RemovePodSandbox for \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\"" Apr 24 23:36:59.611004 containerd[1485]: time="2026-04-24T23:36:59.610928768Z" level=info msg="Forcibly stopping sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\"" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.671 [WARNING][5488] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0", GenerateName:"calico-kube-controllers-5dfdb6b6fc-", Namespace:"calico-system", SelfLink:"", UID:"9bdb0b4f-d6e3-4c87-82ef-c529db27e927", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfdb6b6fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"3d646bcada02f25f32aa0681c2d4038e49430e895c49b12eb73c3c584d646b13", Pod:"calico-kube-controllers-5dfdb6b6fc-lsx28", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif44c8ca205a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.672 [INFO][5488] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.672 [INFO][5488] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" iface="eth0" netns="" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.672 [INFO][5488] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.672 [INFO][5488] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.697 [INFO][5496] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.697 [INFO][5496] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.697 [INFO][5496] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.714 [WARNING][5496] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.714 [INFO][5496] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" HandleID="k8s-pod-network.91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--kube--controllers--5dfdb6b6fc--lsx28-eth0" Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.716 [INFO][5496] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.721712 containerd[1485]: 2026-04-24 23:36:59.719 [INFO][5488] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609" Apr 24 23:36:59.722306 containerd[1485]: time="2026-04-24T23:36:59.722275068Z" level=info msg="TearDown network for sandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\" successfully" Apr 24 23:36:59.728688 containerd[1485]: time="2026-04-24T23:36:59.728644785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:36:59.728934 containerd[1485]: time="2026-04-24T23:36:59.728911278Z" level=info msg="RemovePodSandbox \"91de63ab99b8825e602059fa2f5c6599eabd3fcbe85adb9ae72eaab23d169609\" returns successfully" Apr 24 23:36:59.730193 containerd[1485]: time="2026-04-24T23:36:59.729916288Z" level=info msg="StopPodSandbox for \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\"" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.785 [WARNING][5510] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408", Pod:"goldmane-5b85766d88-h97hm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calia08fdbb1449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.786 [INFO][5510] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.786 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" iface="eth0" netns="" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.786 [INFO][5510] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.786 [INFO][5510] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.819 [INFO][5517] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.819 [INFO][5517] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.819 [INFO][5517] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.834 [WARNING][5517] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.834 [INFO][5517] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.836 [INFO][5517] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.843474 containerd[1485]: 2026-04-24 23:36:59.841 [INFO][5510] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.844520 containerd[1485]: time="2026-04-24T23:36:59.844114370Z" level=info msg="TearDown network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\" successfully" Apr 24 23:36:59.844520 containerd[1485]: time="2026-04-24T23:36:59.844146572Z" level=info msg="StopPodSandbox for \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\" returns successfully" Apr 24 23:36:59.845283 containerd[1485]: time="2026-04-24T23:36:59.844967972Z" level=info msg="RemovePodSandbox for \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\"" Apr 24 23:36:59.845283 containerd[1485]: time="2026-04-24T23:36:59.845000494Z" level=info msg="Forcibly stopping sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\"" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.900 [WARNING][5531] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"1b9d7692-7e14-40cd-a26e-6b56dd9c7d2d", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"6020349d5607c2bb2357a61ec43ec362a22c540437e00a3428d57d237ade6408", Pod:"goldmane-5b85766d88-h97hm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.103.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia08fdbb1449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.900 [INFO][5531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.900 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" iface="eth0" netns="" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.900 [INFO][5531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.900 [INFO][5531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.925 [INFO][5538] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.925 [INFO][5538] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.925 [INFO][5538] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.939 [WARNING][5538] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.939 [INFO][5538] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" HandleID="k8s-pod-network.d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-goldmane--5b85766d88--h97hm-eth0" Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.942 [INFO][5538] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:36:59.946391 containerd[1485]: 2026-04-24 23:36:59.944 [INFO][5531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806" Apr 24 23:36:59.946823 containerd[1485]: time="2026-04-24T23:36:59.946436981Z" level=info msg="TearDown network for sandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\" successfully" Apr 24 23:36:59.953814 containerd[1485]: time="2026-04-24T23:36:59.953643659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:36:59.953932 containerd[1485]: time="2026-04-24T23:36:59.953854230Z" level=info msg="RemovePodSandbox \"d6583c7fc529bda2afb9402422ebf156ad347d42a5c20876d16dce0dd3630806\" returns successfully" Apr 24 23:36:59.954592 containerd[1485]: time="2026-04-24T23:36:59.954568265Z" level=info msg="StopPodSandbox for \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\"" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.008 [WARNING][5553] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.008 [INFO][5553] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.008 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" iface="eth0" netns="" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.008 [INFO][5553] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.008 [INFO][5553] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.038 [INFO][5560] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.038 [INFO][5560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.038 [INFO][5560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.056 [WARNING][5560] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.056 [INFO][5560] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.059 [INFO][5560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.066697 containerd[1485]: 2026-04-24 23:37:00.063 [INFO][5553] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.067981 containerd[1485]: time="2026-04-24T23:37:00.067422747Z" level=info msg="TearDown network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\" successfully" Apr 24 23:37:00.067981 containerd[1485]: time="2026-04-24T23:37:00.067457909Z" level=info msg="StopPodSandbox for \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\" returns successfully" Apr 24 23:37:00.068520 containerd[1485]: time="2026-04-24T23:37:00.068484439Z" level=info msg="RemovePodSandbox for \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\"" Apr 24 23:37:00.068623 containerd[1485]: time="2026-04-24T23:37:00.068519960Z" level=info msg="Forcibly stopping sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\"" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.146 [WARNING][5574] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" WorkloadEndpoint="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.146 [INFO][5574] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.146 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" iface="eth0" netns="" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.146 [INFO][5574] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.146 [INFO][5574] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.179 [INFO][5581] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.179 [INFO][5581] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.179 [INFO][5581] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.192 [WARNING][5581] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.192 [INFO][5581] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" HandleID="k8s-pod-network.94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-whisker--6d5557b47c--msgpn-eth0" Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.195 [INFO][5581] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.199532 containerd[1485]: 2026-04-24 23:37:00.197 [INFO][5574] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd" Apr 24 23:37:00.199907 containerd[1485]: time="2026-04-24T23:37:00.199574660Z" level=info msg="TearDown network for sandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\" successfully" Apr 24 23:37:00.212128 containerd[1485]: time="2026-04-24T23:37:00.212046903Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:00.213033 containerd[1485]: time="2026-04-24T23:37:00.212135907Z" level=info msg="RemovePodSandbox \"94e3e5541d595f351c80a3298fade5a21c94626a73268e5ff65a6709dc15b2dd\" returns successfully" Apr 24 23:37:00.213033 containerd[1485]: time="2026-04-24T23:37:00.212980388Z" level=info msg="StopPodSandbox for \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\"" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.274 [WARNING][5596] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2d26b79f-abbb-49bb-8a77-0b8884d8e07b", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e", Pod:"coredns-674b8bbfcf-pcfdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali939d59d0d44", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.274 [INFO][5596] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.274 [INFO][5596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" iface="eth0" netns="" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.274 [INFO][5596] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.274 [INFO][5596] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.310 [INFO][5603] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.310 [INFO][5603] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.310 [INFO][5603] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.346 [WARNING][5603] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.346 [INFO][5603] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.349 [INFO][5603] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.355411 containerd[1485]: 2026-04-24 23:37:00.352 [INFO][5596] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.355411 containerd[1485]: time="2026-04-24T23:37:00.355203187Z" level=info msg="TearDown network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\" successfully" Apr 24 23:37:00.355411 containerd[1485]: time="2026-04-24T23:37:00.355245109Z" level=info msg="StopPodSandbox for \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\" returns successfully" Apr 24 23:37:00.357311 containerd[1485]: time="2026-04-24T23:37:00.357128801Z" level=info msg="RemovePodSandbox for \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\"" Apr 24 23:37:00.357311 containerd[1485]: time="2026-04-24T23:37:00.357163842Z" level=info msg="Forcibly stopping sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\"" Apr 24 23:37:00.455396 containerd[1485]: time="2026-04-24T23:37:00.454234138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:00.456648 containerd[1485]: time="2026-04-24T23:37:00.456434084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 24 23:37:00.457880 containerd[1485]: time="2026-04-24T23:37:00.457527937Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:00.461639 containerd[1485]: time="2026-04-24T23:37:00.461531131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:00.464013 containerd[1485]: time="2026-04-24T23:37:00.463981129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 5.132318316s" Apr 24 23:37:00.464349 containerd[1485]: time="2026-04-24T23:37:00.464019131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 24 23:37:00.466151 containerd[1485]: time="2026-04-24T23:37:00.466103592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 24 23:37:00.468935 containerd[1485]: time="2026-04-24T23:37:00.468894727Z" level=info msg="CreateContainer within sandbox \"7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.425 [WARNING][5617] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2d26b79f-abbb-49bb-8a77-0b8884d8e07b", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"7ce6551a442d75e0254bd47ba0787a32ab163fbc55608b19c73a22b92f98cc9e", Pod:"coredns-674b8bbfcf-pcfdd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali939d59d0d44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 
23:37:00.425 [INFO][5617] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.425 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" iface="eth0" netns="" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.425 [INFO][5617] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.425 [INFO][5617] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.448 [INFO][5624] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.449 [INFO][5624] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.449 [INFO][5624] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.463 [WARNING][5624] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.464 [INFO][5624] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" HandleID="k8s-pod-network.2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-coredns--674b8bbfcf--pcfdd-eth0" Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.467 [INFO][5624] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.473123 containerd[1485]: 2026-04-24 23:37:00.471 [INFO][5617] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5" Apr 24 23:37:00.475194 containerd[1485]: time="2026-04-24T23:37:00.473494189Z" level=info msg="TearDown network for sandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\" successfully" Apr 24 23:37:00.484429 containerd[1485]: time="2026-04-24T23:37:00.484382796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:00.484716 containerd[1485]: time="2026-04-24T23:37:00.484678130Z" level=info msg="RemovePodSandbox \"2cd962c29ac1a9e98c8fcba9c51ba8495d1c621101c355508867b23e08db4ba5\" returns successfully" Apr 24 23:37:00.489913 containerd[1485]: time="2026-04-24T23:37:00.489464762Z" level=info msg="StopPodSandbox for \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\"" Apr 24 23:37:00.492441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652218193.mount: Deactivated successfully. Apr 24 23:37:00.499252 containerd[1485]: time="2026-04-24T23:37:00.498462837Z" level=info msg="CreateContainer within sandbox \"7a629d59335db3d81377e1109ccdfc6eb263761de5a1dd35c61385be9b8d1675\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6438437c80d20881d4d6fd25676e91d31cb6069de562259f8905299b5662ac5d\"" Apr 24 23:37:00.503467 containerd[1485]: time="2026-04-24T23:37:00.503287910Z" level=info msg="StartContainer for \"6438437c80d20881d4d6fd25676e91d31cb6069de562259f8905299b5662ac5d\"" Apr 24 23:37:00.560527 systemd[1]: Started cri-containerd-6438437c80d20881d4d6fd25676e91d31cb6069de562259f8905299b5662ac5d.scope - libcontainer container 6438437c80d20881d4d6fd25676e91d31cb6069de562259f8905299b5662ac5d. Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.566 [WARNING][5643] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"da8aa797-c626-438e-9db6-18259f8e1bda", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c", Pod:"calico-apiserver-5dfb68d68d-lvlpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f29a15cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.566 [INFO][5643] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.566 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" iface="eth0" netns="" Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.566 [INFO][5643] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.566 [INFO][5643] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.594 [INFO][5668] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.594 [INFO][5668] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.594 [INFO][5668] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.606 [WARNING][5668] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.606 [INFO][5668] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.608 [INFO][5668] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.614575 containerd[1485]: 2026-04-24 23:37:00.611 [INFO][5643] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.614575 containerd[1485]: time="2026-04-24T23:37:00.614540532Z" level=info msg="TearDown network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\" successfully" Apr 24 23:37:00.614575 containerd[1485]: time="2026-04-24T23:37:00.614566733Z" level=info msg="StopPodSandbox for \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\" returns successfully" Apr 24 23:37:00.617809 containerd[1485]: time="2026-04-24T23:37:00.617367588Z" level=info msg="RemovePodSandbox for \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\"" Apr 24 23:37:00.617809 containerd[1485]: time="2026-04-24T23:37:00.617425631Z" level=info msg="Forcibly stopping sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\"" Apr 24 23:37:00.629153 containerd[1485]: time="2026-04-24T23:37:00.627620004Z" level=info msg="StartContainer for \"6438437c80d20881d4d6fd25676e91d31cb6069de562259f8905299b5662ac5d\" returns successfully" 
Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.677 [WARNING][5701] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0", GenerateName:"calico-apiserver-5dfb68d68d-", Namespace:"calico-system", SelfLink:"", UID:"da8aa797-c626-438e-9db6-18259f8e1bda", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dfb68d68d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c", Pod:"calico-apiserver-5dfb68d68d-lvlpx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.103.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali43f29a15cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.678 [INFO][5701] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.678 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" iface="eth0" netns="" Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.678 [INFO][5701] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.678 [INFO][5701] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.706 [INFO][5709] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.706 [INFO][5709] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.707 [INFO][5709] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.718 [WARNING][5709] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.718 [INFO][5709] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" HandleID="k8s-pod-network.b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-calico--apiserver--5dfb68d68d--lvlpx-eth0" Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.722 [INFO][5709] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.730419 containerd[1485]: 2026-04-24 23:37:00.727 [INFO][5701] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b" Apr 24 23:37:00.731124 containerd[1485]: time="2026-04-24T23:37:00.730459699Z" level=info msg="TearDown network for sandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\" successfully" Apr 24 23:37:00.735475 containerd[1485]: time="2026-04-24T23:37:00.735428019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:00.735692 containerd[1485]: time="2026-04-24T23:37:00.735518544Z" level=info msg="RemovePodSandbox \"b0af18a9ee2d5dc9c08403bd41579b747405db800fb6a09a7c176e456c30685b\" returns successfully" Apr 24 23:37:00.736432 containerd[1485]: time="2026-04-24T23:37:00.736091011Z" level=info msg="StopPodSandbox for \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\"" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.780 [WARNING][5729] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459", Pod:"csi-node-driver-56znh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68b6235bfa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.780 [INFO][5729] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.780 [INFO][5729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" iface="eth0" netns="" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.780 [INFO][5729] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.780 [INFO][5729] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.806 [INFO][5736] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.806 [INFO][5736] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.806 [INFO][5736] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.821 [WARNING][5736] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.821 [INFO][5736] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.824 [INFO][5736] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.829008 containerd[1485]: 2026-04-24 23:37:00.826 [INFO][5729] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.829944 containerd[1485]: time="2026-04-24T23:37:00.829445967Z" level=info msg="TearDown network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\" successfully" Apr 24 23:37:00.829944 containerd[1485]: time="2026-04-24T23:37:00.829522611Z" level=info msg="StopPodSandbox for \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\" returns successfully" Apr 24 23:37:00.830769 containerd[1485]: time="2026-04-24T23:37:00.830436215Z" level=info msg="RemovePodSandbox for \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\"" Apr 24 23:37:00.830769 containerd[1485]: time="2026-04-24T23:37:00.830469136Z" level=info msg="Forcibly stopping sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\"" Apr 24 23:37:00.861189 containerd[1485]: time="2026-04-24T23:37:00.860674277Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 
24 23:37:00.864741 containerd[1485]: time="2026-04-24T23:37:00.864608348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 24 23:37:00.868976 containerd[1485]: time="2026-04-24T23:37:00.868937197Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 402.786323ms" Apr 24 23:37:00.869310 containerd[1485]: time="2026-04-24T23:37:00.869288214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 24 23:37:00.872148 containerd[1485]: time="2026-04-24T23:37:00.872118711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 24 23:37:00.875845 containerd[1485]: time="2026-04-24T23:37:00.875771808Z" level=info msg="CreateContainer within sandbox \"09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 23:37:00.902819 containerd[1485]: time="2026-04-24T23:37:00.902704510Z" level=info msg="CreateContainer within sandbox \"09545cf981439d5a0c923970b47a4cf6aa8d25ca224d5ecee73e119ebb28fd5c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"82395774a66012b09d4f71ab5ada054b437a84b216270d6c2e635f8b918d8fa3\"" Apr 24 23:37:00.905440 containerd[1485]: time="2026-04-24T23:37:00.904707687Z" level=info msg="StartContainer for \"82395774a66012b09d4f71ab5ada054b437a84b216270d6c2e635f8b918d8fa3\"" Apr 24 23:37:00.957862 systemd[1]: Started cri-containerd-82395774a66012b09d4f71ab5ada054b437a84b216270d6c2e635f8b918d8fa3.scope - libcontainer container 
82395774a66012b09d4f71ab5ada054b437a84b216270d6c2e635f8b918d8fa3. Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.889 [WARNING][5750] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e6fb3e8-3a5f-4261-a733-c3a9f69c9b13", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 23, 36, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-3eeab28b3a", ContainerID:"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459", Pod:"csi-node-driver-56znh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali68b6235bfa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.889 [INFO][5750] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.889 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" iface="eth0" netns="" Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.889 [INFO][5750] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.889 [INFO][5750] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.956 [INFO][5757] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.957 [INFO][5757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.957 [INFO][5757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.970 [WARNING][5757] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.970 [INFO][5757] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" HandleID="k8s-pod-network.8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Workload="ci--4081--3--6--n--3eeab28b3a-k8s-csi--node--driver--56znh-eth0" Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.972 [INFO][5757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 23:37:00.981643 containerd[1485]: 2026-04-24 23:37:00.976 [INFO][5750] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4" Apr 24 23:37:00.981643 containerd[1485]: time="2026-04-24T23:37:00.981502162Z" level=info msg="TearDown network for sandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\" successfully" Apr 24 23:37:00.990398 containerd[1485]: time="2026-04-24T23:37:00.989459427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 24 23:37:00.990664 containerd[1485]: time="2026-04-24T23:37:00.990565640Z" level=info msg="RemovePodSandbox \"8d95c213f6eba34f6f9d4fe73deb643071c25055f95b7a7a2ae3cebff2e583c4\" returns successfully" Apr 24 23:37:01.017765 containerd[1485]: time="2026-04-24T23:37:01.015471825Z" level=info msg="StartContainer for \"82395774a66012b09d4f71ab5ada054b437a84b216270d6c2e635f8b918d8fa3\" returns successfully" Apr 24 23:37:01.552030 kubelet[2603]: I0424 23:37:01.551782 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5dfb68d68d-klhv4" podStartSLOduration=31.980699347 podStartE2EDuration="41.551763207s" podCreationTimestamp="2026-04-24 23:36:20 +0000 UTC" firstStartedPulling="2026-04-24 23:36:50.894363499 +0000 UTC m=+51.990810749" lastFinishedPulling="2026-04-24 23:37:00.465427239 +0000 UTC m=+61.561874609" observedRunningTime="2026-04-24 23:37:01.550142851 +0000 UTC m=+62.646590141" watchObservedRunningTime="2026-04-24 23:37:01.551763207 +0000 UTC m=+62.648210457" Apr 24 23:37:01.553109 kubelet[2603]: I0424 23:37:01.552554 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5dfb68d68d-lvlpx" podStartSLOduration=33.176825876 podStartE2EDuration="41.552544844s" podCreationTimestamp="2026-04-24 23:36:20 +0000 UTC" firstStartedPulling="2026-04-24 23:36:52.495362853 +0000 UTC m=+53.591810103" lastFinishedPulling="2026-04-24 23:37:00.871081781 +0000 UTC m=+61.967529071" observedRunningTime="2026-04-24 23:37:01.535795656 +0000 UTC m=+62.632242946" watchObservedRunningTime="2026-04-24 23:37:01.552544844 +0000 UTC m=+62.648992294" Apr 24 23:37:02.538397 kubelet[2603]: I0424 23:37:02.538360 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:02.539629 kubelet[2603]: I0424 23:37:02.538423 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:02.590270 containerd[1485]: 
time="2026-04-24T23:37:02.590092517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:02.592601 containerd[1485]: time="2026-04-24T23:37:02.592111849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 24 23:37:02.594599 containerd[1485]: time="2026-04-24T23:37:02.594419675Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:02.602374 containerd[1485]: time="2026-04-24T23:37:02.601695448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 23:37:02.602710 containerd[1485]: time="2026-04-24T23:37:02.602581888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.73030133s" Apr 24 23:37:02.602710 containerd[1485]: time="2026-04-24T23:37:02.602623210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 24 23:37:02.607649 containerd[1485]: time="2026-04-24T23:37:02.607596118Z" level=info msg="CreateContainer within sandbox \"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 
23:37:02.643804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2713323796.mount: Deactivated successfully. Apr 24 23:37:02.656552 containerd[1485]: time="2026-04-24T23:37:02.656058214Z" level=info msg="CreateContainer within sandbox \"28a7391bf306f7878023387923ccfc557f27cbfa4d09e7d77982537815981459\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fa4e9fef11b65b3bf599c506f75ea79b28392216f5f7e8407481c3e671e3d1c9\"" Apr 24 23:37:02.658959 containerd[1485]: time="2026-04-24T23:37:02.658712535Z" level=info msg="StartContainer for \"fa4e9fef11b65b3bf599c506f75ea79b28392216f5f7e8407481c3e671e3d1c9\"" Apr 24 23:37:02.710567 systemd[1]: Started cri-containerd-fa4e9fef11b65b3bf599c506f75ea79b28392216f5f7e8407481c3e671e3d1c9.scope - libcontainer container fa4e9fef11b65b3bf599c506f75ea79b28392216f5f7e8407481c3e671e3d1c9. Apr 24 23:37:02.788712 containerd[1485]: time="2026-04-24T23:37:02.788592275Z" level=info msg="StartContainer for \"fa4e9fef11b65b3bf599c506f75ea79b28392216f5f7e8407481c3e671e3d1c9\" returns successfully" Apr 24 23:37:03.148669 kubelet[2603]: I0424 23:37:03.148293 2603 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 23:37:03.148669 kubelet[2603]: I0424 23:37:03.148591 2603 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 23:37:03.559037 kubelet[2603]: I0424 23:37:03.558948 2603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-56znh" podStartSLOduration=28.654828825 podStartE2EDuration="41.558670953s" podCreationTimestamp="2026-04-24 23:36:22 +0000 UTC" firstStartedPulling="2026-04-24 23:36:49.699690644 +0000 UTC m=+50.796137934" lastFinishedPulling="2026-04-24 23:37:02.603532772 +0000 UTC m=+63.699980062" 
observedRunningTime="2026-04-24 23:37:03.558443023 +0000 UTC m=+64.654890393" watchObservedRunningTime="2026-04-24 23:37:03.558670953 +0000 UTC m=+64.655118283" Apr 24 23:37:19.268500 kubelet[2603]: I0424 23:37:19.267763 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:37:23.245937 kubelet[2603]: I0424 23:37:23.244675 2603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 23:38:24.418662 systemd[1]: Started sshd@7-178.105.26.190:22-50.85.169.122:38688.service - OpenSSH per-connection server daemon (50.85.169.122:38688). Apr 24 23:38:24.551031 sshd[6169]: Accepted publickey for core from 50.85.169.122 port 38688 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:24.553817 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:24.560529 systemd-logind[1462]: New session 8 of user core. Apr 24 23:38:24.566790 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 23:38:24.761913 sshd[6169]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:24.768917 systemd[1]: sshd@7-178.105.26.190:22-50.85.169.122:38688.service: Deactivated successfully. Apr 24 23:38:24.771762 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 23:38:24.772618 systemd-logind[1462]: Session 8 logged out. Waiting for processes to exit. Apr 24 23:38:24.773593 systemd-logind[1462]: Removed session 8. Apr 24 23:38:29.797880 systemd[1]: Started sshd@8-178.105.26.190:22-50.85.169.122:59408.service - OpenSSH per-connection server daemon (50.85.169.122:59408). Apr 24 23:38:29.914619 sshd[6202]: Accepted publickey for core from 50.85.169.122 port 59408 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:29.917304 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:29.922826 systemd-logind[1462]: New session 9 of user core. 
Apr 24 23:38:29.927645 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 23:38:30.106191 sshd[6202]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:30.111177 systemd[1]: sshd@8-178.105.26.190:22-50.85.169.122:59408.service: Deactivated successfully. Apr 24 23:38:30.115875 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 23:38:30.118977 systemd-logind[1462]: Session 9 logged out. Waiting for processes to exit. Apr 24 23:38:30.120198 systemd-logind[1462]: Removed session 9. Apr 24 23:38:35.142740 systemd[1]: Started sshd@9-178.105.26.190:22-50.85.169.122:59422.service - OpenSSH per-connection server daemon (50.85.169.122:59422). Apr 24 23:38:35.263470 sshd[6235]: Accepted publickey for core from 50.85.169.122 port 59422 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:35.265446 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:35.273392 systemd-logind[1462]: New session 10 of user core. Apr 24 23:38:35.282663 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 23:38:35.466238 sshd[6235]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:35.471563 systemd[1]: sshd@9-178.105.26.190:22-50.85.169.122:59422.service: Deactivated successfully. Apr 24 23:38:35.474503 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 23:38:35.475416 systemd-logind[1462]: Session 10 logged out. Waiting for processes to exit. Apr 24 23:38:35.476980 systemd-logind[1462]: Removed session 10. Apr 24 23:38:40.495688 systemd[1]: Started sshd@10-178.105.26.190:22-50.85.169.122:54644.service - OpenSSH per-connection server daemon (50.85.169.122:54644). 
Apr 24 23:38:40.628356 sshd[6288]: Accepted publickey for core from 50.85.169.122 port 54644 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:40.632248 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:40.638654 systemd-logind[1462]: New session 11 of user core. Apr 24 23:38:40.643540 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 23:38:40.829926 sshd[6288]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:40.837060 systemd[1]: sshd@10-178.105.26.190:22-50.85.169.122:54644.service: Deactivated successfully. Apr 24 23:38:40.841101 systemd[1]: session-11.scope: Deactivated successfully. Apr 24 23:38:40.843977 systemd-logind[1462]: Session 11 logged out. Waiting for processes to exit. Apr 24 23:38:40.861434 systemd[1]: Started sshd@11-178.105.26.190:22-50.85.169.122:54650.service - OpenSSH per-connection server daemon (50.85.169.122:54650). Apr 24 23:38:40.862455 systemd-logind[1462]: Removed session 11. Apr 24 23:38:40.987793 sshd[6302]: Accepted publickey for core from 50.85.169.122 port 54650 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:40.989879 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:40.997258 systemd-logind[1462]: New session 12 of user core. Apr 24 23:38:41.003536 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 23:38:41.234814 sshd[6302]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:41.241741 systemd[1]: sshd@11-178.105.26.190:22-50.85.169.122:54650.service: Deactivated successfully. Apr 24 23:38:41.244884 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 23:38:41.248117 systemd-logind[1462]: Session 12 logged out. Waiting for processes to exit. 
Apr 24 23:38:41.263859 systemd[1]: Started sshd@12-178.105.26.190:22-50.85.169.122:54654.service - OpenSSH per-connection server daemon (50.85.169.122:54654). Apr 24 23:38:41.265575 systemd-logind[1462]: Removed session 12. Apr 24 23:38:41.396776 sshd[6312]: Accepted publickey for core from 50.85.169.122 port 54654 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:41.398970 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:41.404396 systemd-logind[1462]: New session 13 of user core. Apr 24 23:38:41.413755 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 23:38:41.591713 sshd[6312]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:41.597343 systemd[1]: sshd@12-178.105.26.190:22-50.85.169.122:54654.service: Deactivated successfully. Apr 24 23:38:41.600207 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 23:38:41.602672 systemd-logind[1462]: Session 13 logged out. Waiting for processes to exit. Apr 24 23:38:41.605464 systemd-logind[1462]: Removed session 13. Apr 24 23:38:46.623683 systemd[1]: Started sshd@13-178.105.26.190:22-50.85.169.122:54668.service - OpenSSH per-connection server daemon (50.85.169.122:54668). Apr 24 23:38:46.746901 sshd[6324]: Accepted publickey for core from 50.85.169.122 port 54668 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:46.749716 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:46.754903 systemd-logind[1462]: New session 14 of user core. Apr 24 23:38:46.761648 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 23:38:46.937588 sshd[6324]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:46.944959 systemd[1]: sshd@13-178.105.26.190:22-50.85.169.122:54668.service: Deactivated successfully. Apr 24 23:38:46.947748 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 24 23:38:46.949066 systemd-logind[1462]: Session 14 logged out. Waiting for processes to exit. Apr 24 23:38:46.951869 systemd-logind[1462]: Removed session 14. Apr 24 23:38:46.969686 systemd[1]: Started sshd@14-178.105.26.190:22-50.85.169.122:54678.service - OpenSSH per-connection server daemon (50.85.169.122:54678). Apr 24 23:38:47.096657 sshd[6337]: Accepted publickey for core from 50.85.169.122 port 54678 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:47.099859 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:47.105592 systemd-logind[1462]: New session 15 of user core. Apr 24 23:38:47.110572 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 23:38:47.470564 sshd[6337]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:47.476900 systemd[1]: sshd@14-178.105.26.190:22-50.85.169.122:54678.service: Deactivated successfully. Apr 24 23:38:47.480751 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 23:38:47.482384 systemd-logind[1462]: Session 15 logged out. Waiting for processes to exit. Apr 24 23:38:47.483960 systemd-logind[1462]: Removed session 15. Apr 24 23:38:47.508788 systemd[1]: Started sshd@15-178.105.26.190:22-50.85.169.122:54686.service - OpenSSH per-connection server daemon (50.85.169.122:54686). Apr 24 23:38:47.640191 sshd[6348]: Accepted publickey for core from 50.85.169.122 port 54686 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:47.641582 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:47.647783 systemd-logind[1462]: New session 16 of user core. Apr 24 23:38:47.651547 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 24 23:38:48.477851 sshd[6348]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:48.482666 systemd-logind[1462]: Session 16 logged out. Waiting for processes to exit. 
Apr 24 23:38:48.484443 systemd[1]: sshd@15-178.105.26.190:22-50.85.169.122:54686.service: Deactivated successfully. Apr 24 23:38:48.487085 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 23:38:48.489505 systemd-logind[1462]: Removed session 16. Apr 24 23:38:48.509231 systemd[1]: Started sshd@16-178.105.26.190:22-50.85.169.122:54688.service - OpenSSH per-connection server daemon (50.85.169.122:54688). Apr 24 23:38:48.641787 sshd[6371]: Accepted publickey for core from 50.85.169.122 port 54688 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:48.643526 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:48.649291 systemd-logind[1462]: New session 17 of user core. Apr 24 23:38:48.656668 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 23:38:48.998013 sshd[6371]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:49.001134 systemd[1]: sshd@16-178.105.26.190:22-50.85.169.122:54688.service: Deactivated successfully. Apr 24 23:38:49.005944 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 23:38:49.007733 systemd-logind[1462]: Session 17 logged out. Waiting for processes to exit. Apr 24 23:38:49.009370 systemd-logind[1462]: Removed session 17. Apr 24 23:38:49.021466 systemd[1]: Started sshd@17-178.105.26.190:22-50.85.169.122:54700.service - OpenSSH per-connection server daemon (50.85.169.122:54700). Apr 24 23:38:49.161446 sshd[6404]: Accepted publickey for core from 50.85.169.122 port 54700 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:49.164401 sshd[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:49.172728 systemd-logind[1462]: New session 18 of user core. Apr 24 23:38:49.180545 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 24 23:38:49.352193 sshd[6404]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:49.359410 systemd-logind[1462]: Session 18 logged out. Waiting for processes to exit. Apr 24 23:38:49.359696 systemd[1]: sshd@17-178.105.26.190:22-50.85.169.122:54700.service: Deactivated successfully. Apr 24 23:38:49.362072 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 23:38:49.365266 systemd-logind[1462]: Removed session 18. Apr 24 23:38:52.474252 systemd[1]: run-containerd-runc-k8s.io-b961192389d69b6101091ed77fcc45f3c2a95f48dfc976f3d089f4e1ff93f746-runc.8xqJH2.mount: Deactivated successfully. Apr 24 23:38:54.394394 systemd[1]: Started sshd@18-178.105.26.190:22-50.85.169.122:60476.service - OpenSSH per-connection server daemon (50.85.169.122:60476). Apr 24 23:38:54.518823 sshd[6440]: Accepted publickey for core from 50.85.169.122 port 60476 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:54.521307 sshd[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:54.526901 systemd-logind[1462]: New session 19 of user core. Apr 24 23:38:54.536587 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 23:38:54.703680 sshd[6440]: pam_unix(sshd:session): session closed for user core Apr 24 23:38:54.711449 systemd-logind[1462]: Session 19 logged out. Waiting for processes to exit. Apr 24 23:38:54.712142 systemd[1]: sshd@18-178.105.26.190:22-50.85.169.122:60476.service: Deactivated successfully. Apr 24 23:38:54.717715 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 23:38:54.719383 systemd-logind[1462]: Removed session 19. Apr 24 23:38:59.735660 systemd[1]: Started sshd@19-178.105.26.190:22-50.85.169.122:58378.service - OpenSSH per-connection server daemon (50.85.169.122:58378). 
Apr 24 23:38:59.879309 sshd[6473]: Accepted publickey for core from 50.85.169.122 port 58378 ssh2: RSA SHA256:LBBtzjDnLNZXcnA2s4HQvRsYKvtzAwhHM1R/Z1hMC7M Apr 24 23:38:59.881558 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 23:38:59.886743 systemd-logind[1462]: New session 20 of user core. Apr 24 23:38:59.892674 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 24 23:39:00.070923 sshd[6473]: pam_unix(sshd:session): session closed for user core Apr 24 23:39:00.076107 systemd[1]: sshd@19-178.105.26.190:22-50.85.169.122:58378.service: Deactivated successfully. Apr 24 23:39:00.079068 systemd[1]: session-20.scope: Deactivated successfully. Apr 24 23:39:00.081456 systemd-logind[1462]: Session 20 logged out. Waiting for processes to exit. Apr 24 23:39:00.083313 systemd-logind[1462]: Removed session 20.