Apr 17 23:27:49.918521 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 17 23:27:49.918583 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Apr 17 22:13:49 -00 2026
Apr 17 23:27:49.918612 kernel: KASLR enabled
Apr 17 23:27:49.918627 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 17 23:27:49.918641 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Apr 17 23:27:49.918655 kernel: random: crng init done
Apr 17 23:27:49.918672 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:27:49.918686 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 17 23:27:49.918701 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 17 23:27:49.918719 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918735 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918749 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918763 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918778 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918796 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918815 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918831 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918846 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:27:49.918862 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 17 23:27:49.918877 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 17 23:27:49.918892 kernel: NUMA: Failed to initialise from firmware
Apr 17 23:27:49.918908 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 17 23:27:49.918923 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Apr 17 23:27:49.918939 kernel: Zone ranges:
Apr 17 23:27:49.918954 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 17 23:27:49.918972 kernel: DMA32 empty
Apr 17 23:27:49.918988 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 17 23:27:49.919003 kernel: Movable zone start for each node
Apr 17 23:27:49.919018 kernel: Early memory node ranges
Apr 17 23:27:49.919034 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Apr 17 23:27:49.919071 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 17 23:27:49.919090 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 17 23:27:49.919106 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 17 23:27:49.919122 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 17 23:27:49.919137 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 17 23:27:49.919153 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 17 23:27:49.919169 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 17 23:27:49.919188 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 17 23:27:49.919204 kernel: psci: probing for conduit method from ACPI.
Apr 17 23:27:49.919219 kernel: psci: PSCIv1.1 detected in firmware.
Apr 17 23:27:49.919241 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 17 23:27:49.919258 kernel: psci: Trusted OS migration not required
Apr 17 23:27:49.919274 kernel: psci: SMC Calling Convention v1.1
Apr 17 23:27:49.919294 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 17 23:27:49.919311 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Apr 17 23:27:49.919328 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Apr 17 23:27:49.919344 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 17 23:27:49.919361 kernel: Detected PIPT I-cache on CPU0
Apr 17 23:27:49.919377 kernel: CPU features: detected: GIC system register CPU interface
Apr 17 23:27:49.919394 kernel: CPU features: detected: Hardware dirty bit management
Apr 17 23:27:49.919410 kernel: CPU features: detected: Spectre-v4
Apr 17 23:27:49.919426 kernel: CPU features: detected: Spectre-BHB
Apr 17 23:27:49.919443 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 17 23:27:49.919463 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 17 23:27:49.919480 kernel: CPU features: detected: ARM erratum 1418040
Apr 17 23:27:49.919496 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 17 23:27:49.919513 kernel: alternatives: applying boot alternatives
Apr 17 23:27:49.919532 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349
Apr 17 23:27:49.919549 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:27:49.919609 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:27:49.919631 kernel: Fallback order for Node 0: 0
Apr 17 23:27:49.919648 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Apr 17 23:27:49.919665 kernel: Policy zone: Normal
Apr 17 23:27:49.919681 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:27:49.919702 kernel: software IO TLB: area num 2.
Apr 17 23:27:49.919719 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Apr 17 23:27:49.919737 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Apr 17 23:27:49.919754 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:27:49.919771 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:27:49.919788 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:27:49.919835 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:27:49.919853 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:27:49.919869 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:27:49.919886 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:27:49.919903 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:27:49.919919 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 17 23:27:49.919938 kernel: GICv3: 256 SPIs implemented
Apr 17 23:27:49.919945 kernel: GICv3: 0 Extended SPIs implemented
Apr 17 23:27:49.919952 kernel: Root IRQ handler: gic_handle_irq
Apr 17 23:27:49.919959 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 17 23:27:49.919966 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 17 23:27:49.919973 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 17 23:27:49.919980 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 17 23:27:49.919987 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Apr 17 23:27:49.919994 kernel: GICv3: using LPI property table @0x00000001000e0000
Apr 17 23:27:49.920001 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Apr 17 23:27:49.920008 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:27:49.920016 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 17 23:27:49.920023 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 17 23:27:49.920040 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 17 23:27:49.920079 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 17 23:27:49.920086 kernel: Console: colour dummy device 80x25
Apr 17 23:27:49.920093 kernel: ACPI: Core revision 20230628
Apr 17 23:27:49.920101 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 17 23:27:49.920108 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:27:49.920115 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:27:49.920122 kernel: landlock: Up and running.
Apr 17 23:27:49.920132 kernel: SELinux: Initializing.
Apr 17 23:27:49.920140 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:27:49.920147 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:27:49.920154 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:27:49.920161 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:27:49.920169 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:27:49.920176 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:27:49.920183 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 17 23:27:49.920190 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 17 23:27:49.920199 kernel: Remapping and enabling EFI services.
Apr 17 23:27:49.920206 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:27:49.920213 kernel: Detected PIPT I-cache on CPU1
Apr 17 23:27:49.920221 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 17 23:27:49.920228 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Apr 17 23:27:49.920235 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 17 23:27:49.920242 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 17 23:27:49.920249 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:27:49.920256 kernel: SMP: Total of 2 processors activated.
Apr 17 23:27:49.920265 kernel: CPU features: detected: 32-bit EL0 Support
Apr 17 23:27:49.920272 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 17 23:27:49.920280 kernel: CPU features: detected: Common not Private translations
Apr 17 23:27:49.920293 kernel: CPU features: detected: CRC32 instructions
Apr 17 23:27:49.920302 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 17 23:27:49.920309 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 17 23:27:49.920317 kernel: CPU features: detected: LSE atomic instructions
Apr 17 23:27:49.920324 kernel: CPU features: detected: Privileged Access Never
Apr 17 23:27:49.920332 kernel: CPU features: detected: RAS Extension Support
Apr 17 23:27:49.922160 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 17 23:27:49.922171 kernel: CPU: All CPU(s) started at EL1
Apr 17 23:27:49.922179 kernel: alternatives: applying system-wide alternatives
Apr 17 23:27:49.922187 kernel: devtmpfs: initialized
Apr 17 23:27:49.922195 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:27:49.922203 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:27:49.922210 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:27:49.922218 kernel: SMBIOS 3.0.0 present.
Apr 17 23:27:49.922233 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 17 23:27:49.922240 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:27:49.922248 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 17 23:27:49.922256 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 17 23:27:49.922263 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 17 23:27:49.922271 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:27:49.922278 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Apr 17 23:27:49.922286 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:27:49.922293 kernel: cpuidle: using governor menu
Apr 17 23:27:49.922303 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 17 23:27:49.922310 kernel: ASID allocator initialised with 32768 entries
Apr 17 23:27:49.922318 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:27:49.922325 kernel: Serial: AMBA PL011 UART driver
Apr 17 23:27:49.922333 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 17 23:27:49.922340 kernel: Modules: 0 pages in range for non-PLT usage
Apr 17 23:27:49.922348 kernel: Modules: 509008 pages in range for PLT usage
Apr 17 23:27:49.922355 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:27:49.922363 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:27:49.922372 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 17 23:27:49.922380 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 17 23:27:49.922387 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:27:49.922395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:27:49.922402 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 17 23:27:49.922410 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 17 23:27:49.922417 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:27:49.922425 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:27:49.922432 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:27:49.922441 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:27:49.922449 kernel: ACPI: Interpreter enabled
Apr 17 23:27:49.922456 kernel: ACPI: Using GIC for interrupt routing
Apr 17 23:27:49.922464 kernel: ACPI: MCFG table detected, 1 entries
Apr 17 23:27:49.922471 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 17 23:27:49.922479 kernel: printk: console [ttyAMA0] enabled
Apr 17 23:27:49.922486 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:27:49.922692 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:27:49.922776 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 17 23:27:49.922845 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 17 23:27:49.922910 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 17 23:27:49.922975 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 17 23:27:49.922985 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 17 23:27:49.922993 kernel: PCI host bridge to bus 0000:00
Apr 17 23:27:49.923084 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 17 23:27:49.923153 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 17 23:27:49.923214 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 17 23:27:49.923273 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:27:49.923357 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 17 23:27:49.923435 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Apr 17 23:27:49.923505 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Apr 17 23:27:49.923586 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 17 23:27:49.923674 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.923744 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Apr 17 23:27:49.923828 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.923914 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Apr 17 23:27:49.923993 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.928218 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Apr 17 23:27:49.928350 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.928421 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Apr 17 23:27:49.928507 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.928620 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Apr 17 23:27:49.928708 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.928779 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Apr 17 23:27:49.928865 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.928934 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Apr 17 23:27:49.929008 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.929098 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Apr 17 23:27:49.929177 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Apr 17 23:27:49.929248 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Apr 17 23:27:49.929335 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Apr 17 23:27:49.929403 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Apr 17 23:27:49.929485 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Apr 17 23:27:49.929554 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Apr 17 23:27:49.929637 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 17 23:27:49.929709 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 17 23:27:49.929789 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Apr 17 23:27:49.929864 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Apr 17 23:27:49.929942 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Apr 17 23:27:49.930017 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Apr 17 23:27:49.932717 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 17 23:27:49.932842 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Apr 17 23:27:49.932917 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 17 23:27:49.933009 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Apr 17 23:27:49.933106 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Apr 17 23:27:49.933183 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 17 23:27:49.933267 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Apr 17 23:27:49.933343 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Apr 17 23:27:49.933420 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 17 23:27:49.933508 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Apr 17 23:27:49.933595 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Apr 17 23:27:49.933671 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 17 23:27:49.933745 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Apr 17 23:27:49.933821 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 17 23:27:49.933893 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 17 23:27:49.933964 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 17 23:27:49.934042 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 17 23:27:49.934175 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 17 23:27:49.934255 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 17 23:27:49.934330 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 17 23:27:49.934406 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 17 23:27:49.934476 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 17 23:27:49.934547 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 17 23:27:49.934631 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 17 23:27:49.934707 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 17 23:27:49.934779 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 17 23:27:49.934847 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 17 23:27:49.934913 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 17 23:27:49.934985 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 17 23:27:49.935064 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 17 23:27:49.935138 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 17 23:27:49.935214 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 17 23:27:49.935280 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 17 23:27:49.935347 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 17 23:27:49.935418 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 17 23:27:49.935486 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 17 23:27:49.935552 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 17 23:27:49.935678 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 17 23:27:49.936670 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 17 23:27:49.936779 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 17 23:27:49.936857 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Apr 17 23:27:49.936930 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:27:49.937003 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Apr 17 23:27:49.937097 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:27:49.937174 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Apr 17 23:27:49.937255 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:27:49.937328 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Apr 17 23:27:49.937398 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:27:49.937472 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Apr 17 23:27:49.937543 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:27:49.937665 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Apr 17 23:27:49.937752 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:27:49.937841 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Apr 17 23:27:49.937922 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:27:49.938006 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Apr 17 23:27:49.938136 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:27:49.938234 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Apr 17 23:27:49.938304 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:27:49.938381 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Apr 17 23:27:49.938455 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Apr 17 23:27:49.938528 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Apr 17 23:27:49.938615 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Apr 17 23:27:49.938689 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Apr 17 23:27:49.938756 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Apr 17 23:27:49.938824 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Apr 17 23:27:49.938891 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Apr 17 23:27:49.938960 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Apr 17 23:27:49.939032 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Apr 17 23:27:49.939117 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Apr 17 23:27:49.939186 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Apr 17 23:27:49.939254 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Apr 17 23:27:49.939321 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Apr 17 23:27:49.939391 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Apr 17 23:27:49.939474 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Apr 17 23:27:49.939943 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Apr 17 23:27:49.940038 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Apr 17 23:27:49.940433 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Apr 17 23:27:49.940512 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Apr 17 23:27:49.940633 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Apr 17 23:27:49.940722 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Apr 17 23:27:49.940794 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 17 23:27:49.940864 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Apr 17 23:27:49.940934 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 17 23:27:49.941008 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 17 23:27:49.941092 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 17 23:27:49.941160 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:27:49.941238 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Apr 17 23:27:49.941314 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 17 23:27:49.941382 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 17 23:27:49.941449 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 17 23:27:49.941517 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:27:49.941604 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 17 23:27:49.941678 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Apr 17 23:27:49.941747 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 17 23:27:49.941815 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 17 23:27:49.941887 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 17 23:27:49.941953 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:27:49.942031 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 17 23:27:49.942302 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 17 23:27:49.942376 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 17 23:27:49.942443 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 17 23:27:49.942509 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:27:49.942596 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Apr 17 23:27:49.942679 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Apr 17 23:27:49.942750 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 17 23:27:49.942816 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 17 23:27:49.942881 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 17 23:27:49.942945 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:27:49.943020 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Apr 17 23:27:49.947204 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Apr 17 23:27:49.947315 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 17 23:27:49.947393 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 17 23:27:49.947465 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 17 23:27:49.947534 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:27:49.947633 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Apr 17 23:27:49.947709 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Apr 17 23:27:49.947780 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Apr 17 23:27:49.947852 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 17 23:27:49.947922 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 17 23:27:49.947995 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 17 23:27:49.948080 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:27:49.948154 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 17 23:27:49.948222 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 17 23:27:49.948290 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 17 23:27:49.948358 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:27:49.948429 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 17 23:27:49.948496 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 17 23:27:49.948598 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Apr 17 23:27:49.948684 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:27:49.948758 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 17 23:27:49.948820 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 17 23:27:49.948882 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 17 23:27:49.948964 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Apr 17 23:27:49.949029 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Apr 17 23:27:49.950204 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 17 23:27:49.950295 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Apr 17 23:27:49.950358 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Apr 17 23:27:49.950424 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 17 23:27:49.950494 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Apr 17 23:27:49.950557 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Apr 17 23:27:49.950675 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 17 23:27:49.950750 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Apr 17 23:27:49.950814 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Apr 17 23:27:49.950893 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 17 23:27:49.950965 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Apr 17 23:27:49.951028 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Apr 17 23:27:49.951106 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 17 23:27:49.951187 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Apr 17 23:27:49.951250 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Apr 17 23:27:49.951314 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 17 23:27:49.951386 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Apr 17 23:27:49.951453 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Apr 17 23:27:49.951515 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 17 23:27:49.951601 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Apr 17 23:27:49.951668 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Apr 17 23:27:49.951731 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 17 23:27:49.951807 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Apr 17 23:27:49.951871 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Apr 17 23:27:49.951938 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Apr 17 23:27:49.951948 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 17 23:27:49.951956 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 17 23:27:49.951964 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 17 23:27:49.951972 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 17 23:27:49.951980 kernel: iommu: Default domain type: Translated
Apr 17 23:27:49.951988 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 17 23:27:49.951996 kernel: efivars: Registered efivars operations
Apr 17 23:27:49.952006 kernel: vgaarb: loaded
Apr 17 23:27:49.952014 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 17 23:27:49.952022 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:27:49.952030 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:27:49.952038 kernel: pnp: PnP ACPI init
Apr 17 23:27:49.952164 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 17 23:27:49.952178 kernel: pnp: PnP ACPI: found 1 devices
Apr 17 23:27:49.952186 kernel: NET: Registered PF_INET protocol family
Apr 17 23:27:49.952194 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:27:49.952206 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:27:49.952214 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:27:49.952223 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:27:49.952230
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 17 23:27:49.952238 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 17 23:27:49.952246 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:27:49.952254 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 17 23:27:49.952262 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 17 23:27:49.952343 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 17 23:27:49.952358 kernel: PCI: CLS 0 bytes, default 64 Apr 17 23:27:49.952366 kernel: kvm [1]: HYP mode not available Apr 17 23:27:49.952374 kernel: Initialise system trusted keyrings Apr 17 23:27:49.952382 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 17 23:27:49.952390 kernel: Key type asymmetric registered Apr 17 23:27:49.952398 kernel: Asymmetric key parser 'x509' registered Apr 17 23:27:49.952405 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 17 23:27:49.952413 kernel: io scheduler mq-deadline registered Apr 17 23:27:49.952421 kernel: io scheduler kyber registered Apr 17 23:27:49.952431 kernel: io scheduler bfq registered Apr 17 23:27:49.952440 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 17 23:27:49.952514 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 17 23:27:49.952599 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 17 23:27:49.952672 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.952744 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 17 23:27:49.952814 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 17 23:27:49.952886 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.952959 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Apr 17 23:27:49.953027 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 17 23:27:49.953111 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.953186 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 17 23:27:49.953262 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 17 23:27:49.953330 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.953404 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 17 23:27:49.953473 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 17 23:27:49.953540 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.953646 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 17 23:27:49.953725 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 17 23:27:49.953795 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.953876 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 17 23:27:49.953946 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 17 23:27:49.954017 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.954101 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 17 23:27:49.954177 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 17 23:27:49.954249 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.954260 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Apr 17 23:27:49.954333 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 17 23:27:49.954403 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 17 23:27:49.954471 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 17 23:27:49.954482 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 17 23:27:49.954493 kernel: ACPI: button: Power Button [PWRB] Apr 17 23:27:49.954502 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 17 23:27:49.954635 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 17 23:27:49.954726 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 17 23:27:49.954739 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 17 23:27:49.954748 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 17 23:27:49.954821 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 17 23:27:49.954832 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 17 23:27:49.954840 kernel: thunder_xcv, ver 1.0 Apr 17 23:27:49.954854 kernel: thunder_bgx, ver 1.0 Apr 17 23:27:49.954868 kernel: nicpf, ver 1.0 Apr 17 23:27:49.954876 kernel: nicvf, ver 1.0 Apr 17 23:27:49.954975 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 17 23:27:49.955253 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-17T23:27:49 UTC (1776468469) Apr 17 23:27:49.955270 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 17 23:27:49.955278 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 17 23:27:49.955286 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 17 23:27:49.955300 kernel: watchdog: Hard watchdog permanently disabled Apr 17 23:27:49.955308 kernel: NET: Registered PF_INET6 protocol family Apr 17 23:27:49.955316 kernel: Segment Routing with IPv6 Apr 17 23:27:49.955324 kernel: In-situ OAM 
(IOAM) with IPv6 Apr 17 23:27:49.955331 kernel: NET: Registered PF_PACKET protocol family Apr 17 23:27:49.955339 kernel: Key type dns_resolver registered Apr 17 23:27:49.955347 kernel: registered taskstats version 1 Apr 17 23:27:49.955355 kernel: Loading compiled-in X.509 certificates Apr 17 23:27:49.955364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 1161289bfc8d953baa9f687fefeecf0e077bc535' Apr 17 23:27:49.955374 kernel: Key type .fscrypt registered Apr 17 23:27:49.955382 kernel: Key type fscrypt-provisioning registered Apr 17 23:27:49.955389 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 17 23:27:49.955397 kernel: ima: Allocated hash algorithm: sha1 Apr 17 23:27:49.955405 kernel: ima: No architecture policies found Apr 17 23:27:49.955413 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 17 23:27:49.955420 kernel: clk: Disabling unused clocks Apr 17 23:27:49.955429 kernel: Freeing unused kernel memory: 39424K Apr 17 23:27:49.955437 kernel: Run /init as init process Apr 17 23:27:49.955446 kernel: with arguments: Apr 17 23:27:49.955454 kernel: /init Apr 17 23:27:49.955462 kernel: with environment: Apr 17 23:27:49.955469 kernel: HOME=/ Apr 17 23:27:49.955477 kernel: TERM=linux Apr 17 23:27:49.955488 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:27:49.955498 systemd[1]: Detected virtualization kvm. Apr 17 23:27:49.955506 systemd[1]: Detected architecture arm64. Apr 17 23:27:49.955516 systemd[1]: Running in initrd. Apr 17 23:27:49.955524 systemd[1]: No hostname configured, using default hostname. Apr 17 23:27:49.955532 systemd[1]: Hostname set to . 
Apr 17 23:27:49.955541 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:27:49.955549 systemd[1]: Queued start job for default target initrd.target. Apr 17 23:27:49.955558 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:27:49.955592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:27:49.955604 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 17 23:27:49.955616 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:27:49.955625 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 17 23:27:49.955635 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 17 23:27:49.955645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 17 23:27:49.955716 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 17 23:27:49.955727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:27:49.955736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:27:49.955748 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:27:49.955756 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:27:49.955765 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:27:49.955773 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:27:49.955782 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:27:49.955790 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:27:49.955798 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Apr 17 23:27:49.955807 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 17 23:27:49.955817 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:27:49.955826 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:27:49.955834 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:27:49.955843 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:27:49.955851 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 23:27:49.955860 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:27:49.955868 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 23:27:49.955877 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 23:27:49.955885 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:27:49.955895 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:27:49.955904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:27:49.955912 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 23:27:49.955952 systemd-journald[236]: Collecting audit messages is disabled. Apr 17 23:27:49.955975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:27:49.955984 systemd[1]: Finished systemd-fsck-usr.service. Apr 17 23:27:49.955993 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:27:49.956002 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:27:49.956012 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:27:49.956021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Apr 17 23:27:49.956030 systemd-journald[236]: Journal started Apr 17 23:27:49.956063 systemd-journald[236]: Runtime Journal (/run/log/journal/24e3a3af6ebf48b29f798cb2116352d4) is 8.0M, max 76.6M, 68.6M free. Apr 17 23:27:49.934330 systemd-modules-load[237]: Inserted module 'overlay' Apr 17 23:27:49.961983 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 23:27:49.962019 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:27:49.962035 kernel: Bridge firewalling registered Apr 17 23:27:49.961622 systemd-modules-load[237]: Inserted module 'br_netfilter' Apr 17 23:27:49.964733 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:27:49.974386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:27:49.976336 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:27:49.980268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:27:49.991872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:27:50.000385 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:27:50.005136 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:27:50.006316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:27:50.014341 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 23:27:50.018330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Apr 17 23:27:50.034335 dracut-cmdline[274]: dracut-dracut-053 Apr 17 23:27:50.041125 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f77c53ef012912081447488e689e924a7faa1d92b63ab5dfeba9709e9511e349 Apr 17 23:27:50.073905 systemd-resolved[276]: Positive Trust Anchors: Apr 17 23:27:50.073928 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:27:50.073959 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:27:50.080204 systemd-resolved[276]: Defaulting to hostname 'linux'. Apr 17 23:27:50.082155 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:27:50.082890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:27:50.156106 kernel: SCSI subsystem initialized Apr 17 23:27:50.161097 kernel: Loading iSCSI transport class v2.0-870. Apr 17 23:27:50.169193 kernel: iscsi: registered transport (tcp) Apr 17 23:27:50.183164 kernel: iscsi: registered transport (qla4xxx) Apr 17 23:27:50.183305 kernel: QLogic iSCSI HBA Driver Apr 17 23:27:50.235551 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Apr 17 23:27:50.241302 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 23:27:50.264158 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 23:27:50.264243 kernel: device-mapper: uevent: version 1.0.3 Apr 17 23:27:50.264257 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 17 23:27:50.318110 kernel: raid6: neonx8 gen() 15685 MB/s Apr 17 23:27:50.335149 kernel: raid6: neonx4 gen() 15571 MB/s Apr 17 23:27:50.352087 kernel: raid6: neonx2 gen() 13161 MB/s Apr 17 23:27:50.370059 kernel: raid6: neonx1 gen() 10407 MB/s Apr 17 23:27:50.386122 kernel: raid6: int64x8 gen() 6924 MB/s Apr 17 23:27:50.403086 kernel: raid6: int64x4 gen() 7255 MB/s Apr 17 23:27:50.420139 kernel: raid6: int64x2 gen() 6106 MB/s Apr 17 23:27:50.438872 kernel: raid6: int64x1 gen() 5034 MB/s Apr 17 23:27:50.438967 kernel: raid6: using algorithm neonx8 gen() 15685 MB/s Apr 17 23:27:50.454115 kernel: raid6: .... xor() 11882 MB/s, rmw enabled Apr 17 23:27:50.454197 kernel: raid6: using neon recovery algorithm Apr 17 23:27:50.459297 kernel: xor: measuring software checksum speed Apr 17 23:27:50.459370 kernel: 8regs : 19812 MB/sec Apr 17 23:27:50.460185 kernel: 32regs : 19660 MB/sec Apr 17 23:27:50.460228 kernel: arm64_neon : 27025 MB/sec Apr 17 23:27:50.460250 kernel: xor: using function: arm64_neon (27025 MB/sec) Apr 17 23:27:50.512125 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 23:27:50.528255 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:27:50.542430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:27:50.558693 systemd-udevd[458]: Using default interface naming scheme 'v255'. Apr 17 23:27:50.562460 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 17 23:27:50.570396 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 17 23:27:50.587021 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Apr 17 23:27:50.628676 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:27:50.638870 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:27:50.692088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:27:50.699522 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 23:27:50.721449 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 23:27:50.722745 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:27:50.725199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:27:50.727680 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:27:50.735671 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 23:27:50.753830 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:27:50.812075 kernel: scsi host0: Virtio SCSI HBA Apr 17 23:27:50.818101 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 17 23:27:50.818191 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 17 23:27:50.827186 kernel: ACPI: bus type USB registered Apr 17 23:27:50.829071 kernel: usbcore: registered new interface driver usbfs Apr 17 23:27:50.829123 kernel: usbcore: registered new interface driver hub Apr 17 23:27:50.830182 kernel: usbcore: registered new device driver usb Apr 17 23:27:50.831815 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:27:50.831943 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 17 23:27:50.836894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:27:50.838009 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:27:50.838198 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:27:50.839964 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:27:50.848391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:27:50.869572 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:27:50.880187 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 17 23:27:50.880444 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 17 23:27:50.881501 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 17 23:27:50.883312 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 17 23:27:50.883529 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 17 23:27:50.883149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 23:27:50.889259 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 17 23:27:50.889435 kernel: hub 1-0:1.0: USB hub found Apr 17 23:27:50.889568 kernel: hub 1-0:1.0: 4 ports detected Apr 17 23:27:50.892621 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Apr 17 23:27:50.892861 kernel: hub 2-0:1.0: USB hub found Apr 17 23:27:50.893391 kernel: hub 2-0:1.0: 4 ports detected Apr 17 23:27:50.901384 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 17 23:27:50.903596 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 17 23:27:50.904008 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 17 23:27:50.905341 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 17 23:27:50.905473 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 23:27:50.911107 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 17 23:27:50.912963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 23:27:50.912984 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 17 23:27:50.913269 kernel: GPT:17805311 != 80003071 Apr 17 23:27:50.913282 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 23:27:50.913292 kernel: GPT:17805311 != 80003071 Apr 17 23:27:50.913301 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 23:27:50.913320 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:27:50.913330 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 17 23:27:50.914259 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 17 23:27:50.915633 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 17 23:27:50.916134 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:27:50.955097 kernel: BTRFS: device fsid 6218981f-ef91-4196-be05-d5f6a224b350 devid 1 transid 32 /dev/sda3 scanned by (udev-worker) (507) Apr 17 23:27:50.971069 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Apr 17 23:27:50.974082 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (505) Apr 17 23:27:50.977016 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 17 23:27:50.977773 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 17 23:27:50.988320 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 17 23:27:50.994239 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 23:27:51.003034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 23:27:51.014769 disk-uuid[574]: Primary Header is updated. Apr 17 23:27:51.014769 disk-uuid[574]: Secondary Entries is updated. Apr 17 23:27:51.014769 disk-uuid[574]: Secondary Header is updated. Apr 17 23:27:51.132902 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 17 23:27:51.272677 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 17 23:27:51.272761 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 17 23:27:51.273520 kernel: usbcore: registered new interface driver usbhid Apr 17 23:27:51.273559 kernel: usbhid: USB HID core driver Apr 17 23:27:51.378157 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 17 23:27:51.508087 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 17 23:27:51.562173 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 17 23:27:52.032099 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 23:27:52.034636 disk-uuid[576]: The operation has completed 
successfully. Apr 17 23:27:52.083975 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 23:27:52.084111 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 23:27:52.098347 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 23:27:52.117128 sh[586]: Success Apr 17 23:27:52.135099 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 17 23:27:52.187069 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 23:27:52.201275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 23:27:52.202735 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 17 23:27:52.228694 kernel: BTRFS info (device dm-0): first mount of filesystem 6218981f-ef91-4196-be05-d5f6a224b350 Apr 17 23:27:52.228761 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:27:52.228773 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 17 23:27:52.228784 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 17 23:27:52.229216 kernel: BTRFS info (device dm-0): using free space tree Apr 17 23:27:52.237095 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 17 23:27:52.238996 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 23:27:52.240745 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 23:27:52.247348 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 23:27:52.252334 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 17 23:27:52.260645 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:27:52.260720 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 17 23:27:52.262120 kernel: BTRFS info (device sda6): using free space tree Apr 17 23:27:52.269080 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 23:27:52.269145 kernel: BTRFS info (device sda6): auto enabling async discard Apr 17 23:27:52.282625 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 17 23:27:52.283894 kernel: BTRFS info (device sda6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea Apr 17 23:27:52.290124 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 23:27:52.295253 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 23:27:52.391096 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:27:52.398804 ignition[682]: Ignition 2.19.0 Apr 17 23:27:52.400405 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 23:27:52.398816 ignition[682]: Stage: fetch-offline Apr 17 23:27:52.403091 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 23:27:52.398859 ignition[682]: no configs at "/usr/lib/ignition/base.d" Apr 17 23:27:52.398867 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 17 23:27:52.399031 ignition[682]: parsed url from cmdline: "" Apr 17 23:27:52.399035 ignition[682]: no config URL provided Apr 17 23:27:52.399040 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 23:27:52.399076 ignition[682]: no config at "/usr/lib/ignition/user.ign" Apr 17 23:27:52.399081 ignition[682]: failed to fetch config: resource requires networking Apr 17 23:27:52.399413 ignition[682]: Ignition finished successfully Apr 17 23:27:52.424893 systemd-networkd[772]: lo: Link UP Apr 17 23:27:52.424907 systemd-networkd[772]: lo: Gained carrier Apr 17 23:27:52.427536 systemd-networkd[772]: Enumeration completed Apr 17 23:27:52.427997 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 23:27:52.429176 systemd[1]: Reached target network.target - Network. Apr 17 23:27:52.430114 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:27:52.430117 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:27:52.430894 systemd-networkd[772]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:27:52.430896 systemd-networkd[772]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:27:52.431404 systemd-networkd[772]: eth0: Link UP Apr 17 23:27:52.431407 systemd-networkd[772]: eth0: Gained carrier Apr 17 23:27:52.431415 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 17 23:27:52.437351 systemd-networkd[772]: eth1: Link UP
Apr 17 23:27:52.437355 systemd-networkd[772]: eth1: Gained carrier
Apr 17 23:27:52.437365 systemd-networkd[772]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:52.446413 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:27:52.463185 ignition[775]: Ignition 2.19.0
Apr 17 23:27:52.463963 ignition[775]: Stage: fetch
Apr 17 23:27:52.464225 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:52.464238 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:52.464353 ignition[775]: parsed url from cmdline: ""
Apr 17 23:27:52.464356 ignition[775]: no config URL provided
Apr 17 23:27:52.464361 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:27:52.464375 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:27:52.464397 ignition[775]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Apr 17 23:27:52.464990 ignition[775]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 17 23:27:52.482159 systemd-networkd[772]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 17 23:27:52.496150 systemd-networkd[772]: eth0: DHCPv4 address 91.99.151.60/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 17 23:27:52.665717 ignition[775]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Apr 17 23:27:52.675621 ignition[775]: GET result: OK
Apr 17 23:27:52.675759 ignition[775]: parsing config with SHA512: adf343c0c3ff28c001680e44f50d4cb53e927218a1dd33d7e7574cc855ffaa0f268bf9137f97d03f3f3759245e8e59c151c45c2fc4731af6749cffbd29f97cb3
Apr 17 23:27:52.682255 unknown[775]: fetched base config from "system"
Apr 17 23:27:52.682266 unknown[775]: fetched base config from "system"
Apr 17 23:27:52.682762 ignition[775]: fetch: fetch complete
Apr 17 23:27:52.682271 unknown[775]: fetched user config from "hetzner"
Apr 17 23:27:52.682779 ignition[775]: fetch: fetch passed
Apr 17 23:27:52.684509 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:27:52.682838 ignition[775]: Ignition finished successfully
Apr 17 23:27:52.692409 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:27:52.708078 ignition[782]: Ignition 2.19.0
Apr 17 23:27:52.708817 ignition[782]: Stage: kargs
Apr 17 23:27:52.709482 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:52.710037 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:52.711958 ignition[782]: kargs: kargs passed
Apr 17 23:27:52.712181 ignition[782]: Ignition finished successfully
Apr 17 23:27:52.714912 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:27:52.722491 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:27:52.737229 ignition[788]: Ignition 2.19.0
Apr 17 23:27:52.737240 ignition[788]: Stage: disks
Apr 17 23:27:52.737424 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:52.737434 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:52.741197 ignition[788]: disks: disks passed
Apr 17 23:27:52.741748 ignition[788]: Ignition finished successfully
Apr 17 23:27:52.746097 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:27:52.747386 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:27:52.748383 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:27:52.749904 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:27:52.751676 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:27:52.753644 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:27:52.759322 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:27:52.777278 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 17 23:27:52.784111 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:27:52.792181 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:27:52.860104 kernel: EXT4-fs (sda9): mounted filesystem 2a4b2d55-130a-4cda-bef1-b1e6ed7bcf6b r/w with ordered data mode. Quota mode: none.
Apr 17 23:27:52.861889 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:27:52.864898 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:27:52.878563 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:27:52.884217 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:27:52.887718 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Apr 17 23:27:52.891896 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (804)
Apr 17 23:27:52.888506 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:27:52.888580 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:27:52.897401 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:52.897445 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:27:52.897456 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:27:52.898387 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:27:52.901326 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:27:52.915067 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:27:52.915133 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:27:52.918708 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:27:52.956610 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:27:52.962519 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:27:52.966087 coreos-metadata[806]: Apr 17 23:27:52.965 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Apr 17 23:27:52.967875 coreos-metadata[806]: Apr 17 23:27:52.967 INFO Fetch successful
Apr 17 23:27:52.970285 coreos-metadata[806]: Apr 17 23:27:52.968 INFO wrote hostname ci-4081-3-6-n-9c3210a1b0 to /sysroot/etc/hostname
Apr 17 23:27:52.971203 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:27:52.973758 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:27:52.976954 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:27:53.081973 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:27:53.089189 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:27:53.094249 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:27:53.099125 kernel: BTRFS info (device sda6): last unmount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:53.127603 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:27:53.132440 ignition[922]: INFO : Ignition 2.19.0
Apr 17 23:27:53.132440 ignition[922]: INFO : Stage: mount
Apr 17 23:27:53.135218 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:53.135218 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:53.135218 ignition[922]: INFO : mount: mount passed
Apr 17 23:27:53.135218 ignition[922]: INFO : Ignition finished successfully
Apr 17 23:27:53.136019 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:27:53.145229 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:27:53.228013 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:27:53.241644 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:27:53.252812 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (933)
Apr 17 23:27:53.252885 kernel: BTRFS info (device sda6): first mount of filesystem 511634b8-962b-4ed3-9161-3f02d13492ea
Apr 17 23:27:53.252909 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Apr 17 23:27:53.253389 kernel: BTRFS info (device sda6): using free space tree
Apr 17 23:27:53.257080 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 17 23:27:53.257140 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 17 23:27:53.260577 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:27:53.284358 ignition[950]: INFO : Ignition 2.19.0
Apr 17 23:27:53.284358 ignition[950]: INFO : Stage: files
Apr 17 23:27:53.285780 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:53.285780 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:53.285780 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:27:53.288674 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:27:53.288674 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:27:53.291234 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:27:53.291234 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:27:53.291234 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:27:53.290359 unknown[950]: wrote ssh authorized keys file for user: core
Apr 17 23:27:53.294563 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 17 23:27:53.294563 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 17 23:27:53.294563 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:27:53.294563 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 17 23:27:53.347924 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 17 23:27:53.505734 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 17 23:27:53.505734 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:27:53.508730 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 17 23:27:53.559331 systemd-networkd[772]: eth1: Gained IPv6LL
Apr 17 23:27:53.833987 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 17 23:27:54.135305 systemd-networkd[772]: eth0: Gained IPv6LL
Apr 17 23:27:54.536516 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 17 23:27:54.536516 ignition[950]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:27:54.545466 ignition[950]: INFO : files: files passed
Apr 17 23:27:54.545466 ignition[950]: INFO : Ignition finished successfully
Apr 17 23:27:54.545652 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:27:54.558266 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:27:54.563514 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:27:54.569443 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:27:54.580350 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:27:54.593663 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:27:54.593663 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:27:54.597181 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:27:54.598586 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:27:54.600933 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:27:54.609282 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:27:54.645035 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:27:54.646208 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:27:54.648730 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:27:54.650474 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:27:54.651140 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:27:54.657277 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:27:54.670960 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:27:54.680404 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:27:54.692465 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:27:54.693311 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:27:54.695651 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:27:54.698513 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:27:54.698758 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:27:54.701113 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:27:54.701867 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:27:54.703098 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:27:54.704316 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:27:54.705435 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:27:54.706682 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:27:54.707810 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:27:54.709085 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:27:54.710181 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:27:54.711394 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:27:54.712382 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:27:54.712519 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:27:54.713985 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:27:54.714785 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:27:54.715950 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:27:54.716026 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:27:54.717240 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:27:54.717365 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:27:54.719175 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:27:54.719298 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:27:54.720608 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:27:54.720701 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:27:54.721876 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Apr 17 23:27:54.721972 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Apr 17 23:27:54.729341 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:27:54.730747 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:27:54.730887 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:27:54.734297 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:27:54.736516 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:27:54.736707 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:27:54.738746 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:27:54.739106 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:27:54.749650 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:27:54.750472 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:27:54.754112 ignition[1003]: INFO : Ignition 2.19.0
Apr 17 23:27:54.754112 ignition[1003]: INFO : Stage: umount
Apr 17 23:27:54.756164 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:27:54.756164 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Apr 17 23:27:54.756164 ignition[1003]: INFO : umount: umount passed
Apr 17 23:27:54.756164 ignition[1003]: INFO : Ignition finished successfully
Apr 17 23:27:54.758201 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:27:54.759541 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:27:54.760979 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:27:54.761031 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:27:54.766294 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:27:54.766364 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:27:54.767154 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:27:54.767204 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:27:54.767834 systemd[1]: Stopped target network.target - Network.
Apr 17 23:27:54.768411 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:27:54.768467 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:27:54.769624 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:27:54.770589 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:27:54.774135 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:27:54.776460 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:27:54.778264 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:27:54.779892 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:27:54.779977 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:27:54.781141 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:27:54.781176 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:27:54.782015 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:27:54.782075 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:27:54.783126 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:27:54.783166 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:27:54.784253 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:27:54.785208 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:27:54.787729 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:27:54.788302 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:27:54.788405 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:27:54.790298 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:27:54.790378 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:27:54.794251 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:27:54.794360 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:27:54.795206 systemd-networkd[772]: eth0: DHCPv6 lease lost
Apr 17 23:27:54.797753 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:27:54.797850 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:27:54.800270 systemd-networkd[772]: eth1: DHCPv6 lease lost
Apr 17 23:27:54.802537 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:27:54.802666 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:27:54.804033 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:27:54.804102 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:27:54.810182 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:27:54.810841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:27:54.810909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:27:54.812294 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:27:54.812340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:27:54.813023 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:27:54.813079 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:27:54.819033 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:27:54.830503 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:27:54.830882 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:27:54.834793 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:27:54.834947 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:27:54.836397 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:27:54.836436 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:27:54.838404 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:27:54.838437 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:27:54.839612 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:27:54.839659 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:27:54.841502 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:27:54.841550 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:27:54.843075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:27:54.843118 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:27:54.849296 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:27:54.852005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:27:54.853284 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:27:54.855210 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:27:54.855262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:54.857368 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:27:54.857469 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:27:54.859693 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:27:54.865342 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:27:54.877240 systemd[1]: Switching root.
Apr 17 23:27:54.909232 systemd-journald[236]: Journal stopped
Apr 17 23:27:55.873624 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:27:55.873709 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:27:55.873728 kernel: SELinux: policy capability open_perms=1
Apr 17 23:27:55.873745 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:27:55.873754 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:27:55.873768 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:27:55.873784 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:27:55.873797 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:27:55.873813 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:27:55.873823 kernel: audit: type=1403 audit(1776468475.106:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:27:55.873834 systemd[1]: Successfully loaded SELinux policy in 38.602ms.
Apr 17 23:27:55.873858 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.791ms.
Apr 17 23:27:55.873871 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:27:55.873882 systemd[1]: Detected virtualization kvm.
Apr 17 23:27:55.873893 systemd[1]: Detected architecture arm64.
Apr 17 23:27:55.873903 systemd[1]: Detected first boot.
Apr 17 23:27:55.873913 systemd[1]: Hostname set to .
Apr 17 23:27:55.873924 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:27:55.873935 zram_generator::config[1062]: No configuration found.
Apr 17 23:27:55.873946 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:27:55.873958 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:27:55.873968 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 23:27:55.873979 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:27:55.873989 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:27:55.874000 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:27:55.874009 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:27:55.874019 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:27:55.874030 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:27:55.874042 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:27:55.874065 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:27:55.874077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:27:55.874088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:27:55.874099 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:27:55.874109 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:27:55.874119 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:27:55.874130 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:27:55.874141 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 17 23:27:55.874154 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:27:55.874164 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:27:55.874174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:27:55.874188 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:27:55.874199 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:27:55.874209 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:27:55.874220 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:27:55.874232 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:27:55.874243 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:27:55.874253 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:27:55.874264 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:27:55.874274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:27:55.874285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:27:55.874296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:27:55.874306 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:27:55.874316 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:27:55.874327 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:27:55.874338 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:27:55.874348 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:27:55.874358 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:27:55.874368 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:27:55.874379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:55.874389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:27:55.874399 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:27:55.874412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:55.874426 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:27:55.874438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:55.874466 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:27:55.874481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:55.874492 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:27:55.874506 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 17 23:27:55.874518 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 17 23:27:55.874528 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:27:55.874538 kernel: fuse: init (API version 7.39)
Apr 17 23:27:55.874548 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:27:55.874558 kernel: loop: module loaded
Apr 17 23:27:55.874568 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:27:55.874579 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:27:55.874592 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:27:55.874603 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:27:55.874614 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:27:55.874624 kernel: ACPI: bus type drm_connector registered
Apr 17 23:27:55.874674 systemd-journald[1140]: Collecting audit messages is disabled.
Apr 17 23:27:55.874697 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:27:55.874708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:27:55.874720 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:27:55.874734 systemd-journald[1140]: Journal started
Apr 17 23:27:55.874757 systemd-journald[1140]: Runtime Journal (/run/log/journal/24e3a3af6ebf48b29f798cb2116352d4) is 8.0M, max 76.6M, 68.6M free.
Apr 17 23:27:55.878183 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:27:55.879120 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:27:55.881431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:27:55.883538 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:27:55.883719 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:27:55.885860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:55.886026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:55.888427 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:27:55.888660 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:27:55.889832 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:55.889989 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:55.892889 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:27:55.893146 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:27:55.894019 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:55.894335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:55.895649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:27:55.898227 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:27:55.899747 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:27:55.911415 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:27:55.922285 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:27:55.930518 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:27:55.931272 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:27:55.945316 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:27:55.950274 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:27:55.953242 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:27:55.967223 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:27:55.967951 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:27:55.970682 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:27:55.981275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:27:55.985798 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:27:55.988983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:27:55.994396 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:27:55.996244 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:27:55.998369 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:27:56.005183 systemd-journald[1140]: Time spent on flushing to /var/log/journal/24e3a3af6ebf48b29f798cb2116352d4 is 56.239ms for 1111 entries.
Apr 17 23:27:56.005183 systemd-journald[1140]: System Journal (/var/log/journal/24e3a3af6ebf48b29f798cb2116352d4) is 8.0M, max 584.8M, 576.8M free.
Apr 17 23:27:56.068044 systemd-journald[1140]: Received client request to flush runtime journal.
Apr 17 23:27:56.046200 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Apr 17 23:27:56.046211 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Apr 17 23:27:56.057188 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:27:56.061392 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:27:56.064584 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:27:56.077267 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:27:56.080591 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:27:56.082204 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:27:56.106661 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:27:56.130163 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:27:56.139398 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:27:56.163370 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Apr 17 23:27:56.163713 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Apr 17 23:27:56.168592 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:27:56.497576 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:27:56.503543 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:27:56.528697 systemd-udevd[1227]: Using default interface naming scheme 'v255'.
Apr 17 23:27:56.555108 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:27:56.568256 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:27:56.586288 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:27:56.640479 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Apr 17 23:27:56.656153 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:27:56.753519 systemd-networkd[1234]: lo: Link UP
Apr 17 23:27:56.753529 systemd-networkd[1234]: lo: Gained carrier
Apr 17 23:27:56.756831 systemd-networkd[1234]: Enumeration completed
Apr 17 23:27:56.756989 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:27:56.757643 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:56.757647 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:27:56.759226 systemd-networkd[1234]: eth0: Link UP
Apr 17 23:27:56.759243 systemd-networkd[1234]: eth0: Gained carrier
Apr 17 23:27:56.759261 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:56.766143 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:27:56.773190 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:56.792071 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 23:27:56.820271 systemd-networkd[1234]: eth0: DHCPv4 address 91.99.151.60/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 17 23:27:56.822284 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:56.823096 systemd-networkd[1234]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:56.823106 systemd-networkd[1234]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:27:56.823796 systemd-networkd[1234]: eth1: Link UP
Apr 17 23:27:56.823806 systemd-networkd[1234]: eth1: Gained carrier
Apr 17 23:27:56.823823 systemd-networkd[1234]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:27:56.832429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:56.840242 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:56.850391 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:56.851061 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:27:56.851105 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:27:56.856762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:56.858829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:56.875579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:56.875784 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:56.876194 systemd-networkd[1234]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 17 23:27:56.884998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:56.885397 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:56.886570 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:27:56.887083 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:27:56.902083 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1242)
Apr 17 23:27:56.926085 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 17 23:27:56.926159 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 17 23:27:56.926172 kernel: [drm] features: -context_init
Apr 17 23:27:56.935155 kernel: [drm] number of scanouts: 1
Apr 17 23:27:56.935228 kernel: [drm] number of cap sets: 0
Apr 17 23:27:56.948098 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Apr 17 23:27:56.958316 kernel: Console: switching to colour frame buffer device 160x50
Apr 17 23:27:56.971078 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 17 23:27:56.978364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 17 23:27:56.986668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:27:56.995349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:27:56.995658 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:57.002549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:27:57.071680 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:27:57.109795 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:27:57.118607 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:27:57.134076 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:27:57.163143 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:27:57.165396 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:27:57.171340 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:27:57.188900 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:27:57.216981 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:27:57.220298 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:27:57.223673 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:27:57.223925 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:27:57.225206 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:27:57.227075 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:27:57.234347 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:27:57.238339 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:27:57.240799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:57.243255 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:27:57.250308 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:27:57.256748 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:27:57.261620 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:27:57.276341 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:27:57.289225 kernel: loop0: detected capacity change from 0 to 114432
Apr 17 23:27:57.294361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:27:57.297562 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:27:57.311156 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:27:57.332101 kernel: loop1: detected capacity change from 0 to 8
Apr 17 23:27:57.350185 kernel: loop2: detected capacity change from 0 to 114328
Apr 17 23:27:57.377132 kernel: loop3: detected capacity change from 0 to 209336
Apr 17 23:27:57.415109 kernel: loop4: detected capacity change from 0 to 114432
Apr 17 23:27:57.433112 kernel: loop5: detected capacity change from 0 to 8
Apr 17 23:27:57.436072 kernel: loop6: detected capacity change from 0 to 114328
Apr 17 23:27:57.447157 kernel: loop7: detected capacity change from 0 to 209336
Apr 17 23:27:57.467665 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 17 23:27:57.468279 (sd-merge)[1322]: Merged extensions into '/usr'.
Apr 17 23:27:57.476200 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:27:57.476215 systemd[1]: Reloading...
Apr 17 23:27:57.581960 zram_generator::config[1353]: No configuration found.
Apr 17 23:27:57.689147 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 23:27:57.695163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:27:57.756018 systemd[1]: Reloading finished in 278 ms.
Apr 17 23:27:57.774320 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 23:27:57.777287 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:27:57.789352 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:27:57.796284 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:27:57.800255 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:27:57.800284 systemd[1]: Reloading...
Apr 17 23:27:57.818853 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:27:57.819631 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:27:57.820561 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:27:57.820932 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Apr 17 23:27:57.821121 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
Apr 17 23:27:57.825453 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:27:57.825621 systemd-tmpfiles[1395]: Skipping /boot
Apr 17 23:27:57.835135 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:27:57.835275 systemd-tmpfiles[1395]: Skipping /boot
Apr 17 23:27:57.894127 zram_generator::config[1424]: No configuration found.
Apr 17 23:27:58.007196 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:27:58.070180 systemd[1]: Reloading finished in 269 ms.
Apr 17 23:27:58.091259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:27:58.103213 systemd-networkd[1234]: eth0: Gained IPv6LL
Apr 17 23:27:58.117350 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:27:58.125025 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:27:58.129375 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:27:58.145625 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:27:58.152264 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:27:58.155432 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:27:58.175008 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:58.178491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:58.183908 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:58.187148 augenrules[1490]: No rules
Apr 17 23:27:58.194489 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:58.197295 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:58.199036 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:27:58.204293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:58.204777 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:58.208566 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:27:58.210514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:58.210702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:58.223699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:58.231577 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:58.237400 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:27:58.238196 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:58.244422 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:27:58.256986 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:58.257187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:58.260603 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:58.260797 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:58.268026 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:27:58.280972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:27:58.283319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:27:58.289878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:27:58.300382 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:27:58.313356 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:27:58.318316 systemd-resolved[1479]: Positive Trust Anchors:
Apr 17 23:27:58.318616 systemd-resolved[1479]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:27:58.318652 systemd-resolved[1479]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:27:58.324355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:27:58.324811 systemd-resolved[1479]: Using system hostname 'ci-4081-3-6-n-9c3210a1b0'.
Apr 17 23:27:58.328321 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:27:58.328555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:27:58.332454 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:27:58.333848 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:27:58.335640 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:27:58.339163 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:27:58.339347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:27:58.342617 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:27:58.342797 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:27:58.344066 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:27:58.344270 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:27:58.352023 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:27:58.357955 systemd[1]: Reached target network.target - Network.
Apr 17 23:27:58.358709 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:27:58.359374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:27:58.360215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:27:58.366272 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:27:58.367137 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:27:58.422582 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:27:58.425306 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:27:58.426812 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:27:58.427711 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:27:58.428513 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:27:58.429249 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:27:58.429285 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:27:58.429847 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:27:58.430701 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:27:58.431529 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:27:58.432259 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:27:58.434238 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:27:58.438952 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:27:58.443568 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:27:58.445282 systemd-timesyncd[1536]: Contacted time server 213.239.234.28:123 (0.flatcar.pool.ntp.org).
Apr 17 23:27:58.445358 systemd-timesyncd[1536]: Initial clock synchronization to Fri 2026-04-17 23:27:58.823094 UTC.
Apr 17 23:27:58.446133 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:27:58.446849 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:27:58.447513 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:27:58.448267 systemd[1]: System is tainted: cgroupsv1
Apr 17 23:27:58.448320 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:27:58.448342 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:27:58.451197 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:27:58.454762 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 23:27:58.461257 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:27:58.469497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:27:58.475312 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:27:58.477688 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:27:58.487790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:27:58.494314 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:27:58.499783 jq[1543]: false
Apr 17 23:27:58.500585 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:27:58.508877 coreos-metadata[1541]: Apr 17 23:27:58.508 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 17 23:27:58.513090 coreos-metadata[1541]: Apr 17 23:27:58.511 INFO Fetch successful
Apr 17 23:27:58.513090 coreos-metadata[1541]: Apr 17 23:27:58.511 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 17 23:27:58.513090 coreos-metadata[1541]: Apr 17 23:27:58.511 INFO Fetch successful
Apr 17 23:27:58.517435 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:27:58.524040 dbus-daemon[1542]: [system] SELinux support is enabled
Apr 17 23:27:58.530517 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 17 23:27:58.537484 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:27:58.548234 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:27:58.557609 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:27:58.559975 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:27:58.567531 extend-filesystems[1547]: Found loop4
Apr 17 23:27:58.567531 extend-filesystems[1547]: Found loop5
Apr 17 23:27:58.567531 extend-filesystems[1547]: Found loop6
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found loop7
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda1
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda2
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda3
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found usr
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda4
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda6
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda7
Apr 17 23:27:58.574355 extend-filesystems[1547]: Found sda9
Apr 17 23:27:58.574355 extend-filesystems[1547]: Checking size of /dev/sda9
Apr 17 23:27:58.567587 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:27:58.577780 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:27:58.581727 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:27:58.619577 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:27:58.619907 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:27:58.629669 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:27:58.629933 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:27:58.641290 extend-filesystems[1547]: Resized partition /dev/sda9
Apr 17 23:27:58.644123 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:27:58.644378 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:27:58.650082 jq[1570]: true
Apr 17 23:27:58.675000 extend-filesystems[1589]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:27:58.666460 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:27:58.672779 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:27:58.703119 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 17 23:27:58.720887 update_engine[1566]: I20260417 23:27:58.719761 1566 main.cc:92] Flatcar Update Engine starting
Apr 17 23:27:58.726147 systemd-logind[1562]: New seat seat0.
Apr 17 23:27:58.729064 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:27:58.729117 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:27:58.733458 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:27:58.733526 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:27:58.747454 systemd-networkd[1234]: eth1: Gained IPv6LL
Apr 17 23:27:58.749762 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 17 23:27:58.749779 systemd-logind[1562]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 17 23:27:58.757501 jq[1595]: true
Apr 17 23:27:58.773782 update_engine[1566]: I20260417 23:27:58.758697 1566 update_check_scheduler.cc:74] Next update check in 2m29s
Apr 17 23:27:58.770592 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:27:58.781094 tar[1585]: linux-arm64/LICENSE
Apr 17 23:27:58.781094 tar[1585]: linux-arm64/helm
Apr 17 23:27:58.781951 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:27:58.790650 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:27:58.798951 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:27:58.835603 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 17 23:27:58.842786 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:27:58.874072 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (1245)
Apr 17 23:27:58.953467 bash[1635]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:27:58.956313 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:27:58.979654 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 17 23:27:58.988619 systemd[1]: Starting sshkeys.service...
Apr 17 23:27:58.998084 extend-filesystems[1589]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 17 23:27:58.998084 extend-filesystems[1589]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 17 23:27:58.998084 extend-filesystems[1589]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 17 23:27:59.010168 extend-filesystems[1547]: Resized filesystem in /dev/sda9
Apr 17 23:27:59.010168 extend-filesystems[1547]: Found sr0
Apr 17 23:27:59.003416 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:27:59.003763 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:27:59.027362 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 23:27:59.042659 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 23:27:59.098118 coreos-metadata[1649]: Apr 17 23:27:59.097 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 17 23:27:59.101591 coreos-metadata[1649]: Apr 17 23:27:59.099 INFO Fetch successful
Apr 17 23:27:59.104679 unknown[1649]: wrote ssh authorized keys file for user: core
Apr 17 23:27:59.154245 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:27:59.159184 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 17 23:27:59.169718 systemd[1]: Finished sshkeys.service.
Apr 17 23:27:59.187144 containerd[1590]: time="2026-04-17T23:27:59.187006872Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:27:59.261778 containerd[1590]: time="2026-04-17T23:27:59.260792568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:59.262978 containerd[1590]: time="2026-04-17T23:27:59.262933589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:59.263645 containerd[1590]: time="2026-04-17T23:27:59.263619029Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:27:59.263882 containerd[1590]: time="2026-04-17T23:27:59.263730664Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:27:59.265040 containerd[1590]: time="2026-04-17T23:27:59.264285618Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:27:59.265040 containerd[1590]: time="2026-04-17T23:27:59.264328304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:59.265040 containerd[1590]: time="2026-04-17T23:27:59.264465869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:59.265040 containerd[1590]: time="2026-04-17T23:27:59.264483212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:59.265611 containerd[1590]: time="2026-04-17T23:27:59.265586249Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:59.265682 containerd[1590]: time="2026-04-17T23:27:59.265669149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:59.266780 containerd[1590]: time="2026-04-17T23:27:59.266127254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:59.266780 containerd[1590]: time="2026-04-17T23:27:59.266147487Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:59.266780 containerd[1590]: time="2026-04-17T23:27:59.266255604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:59.266780 containerd[1590]: time="2026-04-17T23:27:59.266494919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:27:59.267854 containerd[1590]: time="2026-04-17T23:27:59.267828392Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:27:59.267981 containerd[1590]: time="2026-04-17T23:27:59.267964366Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:27:59.268295 containerd[1590]: time="2026-04-17T23:27:59.268174861Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:27:59.268295 containerd[1590]: time="2026-04-17T23:27:59.268233716Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:27:59.269719 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:27:59.274849 containerd[1590]: time="2026-04-17T23:27:59.274804398Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:27:59.275082 containerd[1590]: time="2026-04-17T23:27:59.275052468Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:27:59.276114 containerd[1590]: time="2026-04-17T23:27:59.275303178Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:27:59.276114 containerd[1590]: time="2026-04-17T23:27:59.275328605Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:27:59.276114 containerd[1590]: time="2026-04-17T23:27:59.275347162Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:27:59.276114 containerd[1590]: time="2026-04-17T23:27:59.275530303Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.276800146Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277725739Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277760172Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277776090Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277790626Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277803779Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277819446Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277834610Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277851492Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277866363Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277879977Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277893088Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277934601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.278082 containerd[1590]: time="2026-04-17T23:27:59.277952446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.278438 containerd[1590]: time="2026-04-17T23:27:59.277965934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.278438 containerd[1590]: time="2026-04-17T23:27:59.277980177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.278438 containerd[1590]: time="2026-04-17T23:27:59.277992744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.278438 containerd[1590]: time="2026-04-17T23:27:59.278006190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.278438 containerd[1590]: time="2026-04-17T23:27:59.278018045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.278438 containerd[1590]: time="2026-04-17T23:27:59.278031785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.279185 containerd[1590]: time="2026-04-17T23:27:59.278058427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.279270 containerd[1590]: time="2026-04-17T23:27:59.279252407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.279327 containerd[1590]: time="2026-04-17T23:27:59.279315451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.279403 containerd[1590]: time="2026-04-17T23:27:59.279389930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.280174 containerd[1590]: time="2026-04-17T23:27:59.280151819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280262324Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280296841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280311670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280325242Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280448481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280467960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280483501Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280495942Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280505535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280518186Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280529663Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:27:59.283251 containerd[1590]: time="2026-04-17T23:27:59.280540303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:27:59.283114 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.280915760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.280997529Z" level=info msg="Connect containerd service"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.281197049Z" level=info msg="using legacy CRI server"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.281209784Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.281297626Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282010965Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282501827Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282543047Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282704070Z" level=info msg="Start subscribing containerd event"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282748390Z" level=info msg="Start recovering state"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282811392Z" level=info msg="Start event monitor"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282822660Z" level=info msg="Start snapshots syncer"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282832420Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:27:59.283616 containerd[1590]: time="2026-04-17T23:27:59.282841929Z" level=info msg="Start streaming server"
Apr 17 23:27:59.285091 containerd[1590]: time="2026-04-17T23:27:59.284399134Z" level=info msg="containerd successfully booted in 0.101771s"
Apr 17 23:27:59.723832 tar[1585]: linux-arm64/README.md
Apr 17 23:27:59.742657 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:27:59.913113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:27:59.915820 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:28:00.099459 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:28:00.130131 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:28:00.140595 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:28:00.150070 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:28:00.151331 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:28:00.162461 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:28:00.176717 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:28:00.185972 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:28:00.188519 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 17 23:28:00.189699 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:28:00.190401 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:28:00.191237 systemd[1]: Startup finished in 6.212s (kernel) + 5.123s (userspace) = 11.336s.
Apr 17 23:28:00.484164 kubelet[1680]: E0417 23:28:00.484082 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:28:00.487736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:28:00.488758 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:28:03.627624 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:28:03.641818 systemd[1]: Started sshd@0-91.99.151.60:22-50.85.169.122:53764.service - OpenSSH per-connection server daemon (50.85.169.122:53764).
Apr 17 23:28:03.776637 sshd[1713]: Accepted publickey for core from 50.85.169.122 port 53764 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:03.778765 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:03.788913 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:28:03.795590 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:28:03.801756 systemd-logind[1562]: New session 1 of user core.
Apr 17 23:28:03.816359 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:28:03.832767 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:28:03.838184 (systemd)[1719]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:28:03.954038 systemd[1719]: Queued start job for default target default.target.
Apr 17 23:28:03.955275 systemd[1719]: Created slice app.slice - User Application Slice.
Apr 17 23:28:03.955298 systemd[1719]: Reached target paths.target - Paths.
Apr 17 23:28:03.955310 systemd[1719]: Reached target timers.target - Timers.
Apr 17 23:28:03.961299 systemd[1719]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:28:03.972225 systemd[1719]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:28:03.972362 systemd[1719]: Reached target sockets.target - Sockets.
Apr 17 23:28:03.972380 systemd[1719]: Reached target basic.target - Basic System.
Apr 17 23:28:03.972434 systemd[1719]: Reached target default.target - Main User Target.
Apr 17 23:28:03.972468 systemd[1719]: Startup finished in 126ms.
Apr 17 23:28:03.972614 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:28:03.980203 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:28:04.100579 systemd[1]: Started sshd@1-91.99.151.60:22-50.85.169.122:53770.service - OpenSSH per-connection server daemon (50.85.169.122:53770).
Apr 17 23:28:04.222859 sshd[1731]: Accepted publickey for core from 50.85.169.122 port 53770 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:04.225191 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:04.233916 systemd-logind[1562]: New session 2 of user core.
Apr 17 23:28:04.243697 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:28:04.353512 sshd[1731]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:04.359126 systemd[1]: sshd@1-91.99.151.60:22-50.85.169.122:53770.service: Deactivated successfully.
Apr 17 23:28:04.363564 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:28:04.364585 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:28:04.366494 systemd-logind[1562]: Removed session 2.
Apr 17 23:28:04.379575 systemd[1]: Started sshd@2-91.99.151.60:22-50.85.169.122:53784.service - OpenSSH per-connection server daemon (50.85.169.122:53784).
Apr 17 23:28:04.509014 sshd[1739]: Accepted publickey for core from 50.85.169.122 port 53784 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:04.510550 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:04.515414 systemd-logind[1562]: New session 3 of user core.
Apr 17 23:28:04.524717 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:28:04.626768 sshd[1739]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:04.634093 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:28:04.634727 systemd[1]: sshd@2-91.99.151.60:22-50.85.169.122:53784.service: Deactivated successfully.
Apr 17 23:28:04.637856 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:28:04.639297 systemd-logind[1562]: Removed session 3.
Apr 17 23:28:04.648544 systemd[1]: Started sshd@3-91.99.151.60:22-50.85.169.122:53788.service - OpenSSH per-connection server daemon (50.85.169.122:53788).
Apr 17 23:28:04.787193 sshd[1747]: Accepted publickey for core from 50.85.169.122 port 53788 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:04.788416 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:04.794410 systemd-logind[1562]: New session 4 of user core.
Apr 17 23:28:04.806639 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:28:04.915837 sshd[1747]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:04.923486 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:28:04.924570 systemd[1]: sshd@3-91.99.151.60:22-50.85.169.122:53788.service: Deactivated successfully.
Apr 17 23:28:04.927626 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:28:04.929453 systemd-logind[1562]: Removed session 4.
Apr 17 23:28:04.938643 systemd[1]: Started sshd@4-91.99.151.60:22-50.85.169.122:53802.service - OpenSSH per-connection server daemon (50.85.169.122:53802).
Apr 17 23:28:05.066414 sshd[1755]: Accepted publickey for core from 50.85.169.122 port 53802 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:05.068035 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:05.073802 systemd-logind[1562]: New session 5 of user core.
Apr 17 23:28:05.081654 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:28:05.182697 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:28:05.183014 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:28:05.200855 sudo[1759]: pam_unix(sudo:session): session closed for user root
Apr 17 23:28:05.218671 sshd[1755]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:05.225530 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:28:05.228709 systemd[1]: sshd@4-91.99.151.60:22-50.85.169.122:53802.service: Deactivated successfully.
Apr 17 23:28:05.231809 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:28:05.241593 systemd[1]: Started sshd@5-91.99.151.60:22-50.85.169.122:53804.service - OpenSSH per-connection server daemon (50.85.169.122:53804).
Apr 17 23:28:05.242334 systemd-logind[1562]: Removed session 5.
Apr 17 23:28:05.360027 sshd[1764]: Accepted publickey for core from 50.85.169.122 port 53804 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:05.361291 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:05.367942 systemd-logind[1562]: New session 6 of user core.
Apr 17 23:28:05.379698 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:28:05.468460 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:28:05.468770 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:28:05.474851 sudo[1769]: pam_unix(sudo:session): session closed for user root
Apr 17 23:28:05.482186 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:28:05.482532 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:28:05.501312 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:28:05.502911 auditctl[1772]: No rules
Apr 17 23:28:05.503433 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:28:05.503694 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:28:05.508822 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:28:05.552522 augenrules[1791]: No rules
Apr 17 23:28:05.555582 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:28:05.558261 sudo[1768]: pam_unix(sudo:session): session closed for user root
Apr 17 23:28:05.575468 sshd[1764]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:05.580575 systemd[1]: sshd@5-91.99.151.60:22-50.85.169.122:53804.service: Deactivated successfully.
Apr 17 23:28:05.585090 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:28:05.585873 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:28:05.587169 systemd-logind[1562]: Removed session 6.
Apr 17 23:28:05.604287 systemd[1]: Started sshd@6-91.99.151.60:22-50.85.169.122:53810.service - OpenSSH per-connection server daemon (50.85.169.122:53810).
Apr 17 23:28:05.726610 sshd[1800]: Accepted publickey for core from 50.85.169.122 port 53810 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68
Apr 17 23:28:05.728269 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:28:05.735024 systemd-logind[1562]: New session 7 of user core.
Apr 17 23:28:05.740722 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:28:05.829986 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:28:05.830680 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:28:06.141469 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:28:06.149897 (dockerd)[1819]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:28:06.412214 dockerd[1819]: time="2026-04-17T23:28:06.412158625Z" level=info msg="Starting up"
Apr 17 23:28:06.493725 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1833752090-merged.mount: Deactivated successfully.
Apr 17 23:28:06.562570 systemd[1]: var-lib-docker-metacopy\x2dcheck281490395-merged.mount: Deactivated successfully.
Apr 17 23:28:06.576169 dockerd[1819]: time="2026-04-17T23:28:06.575813249Z" level=info msg="Loading containers: start."
Apr 17 23:28:06.691093 kernel: Initializing XFRM netlink socket
Apr 17 23:28:06.789046 systemd-networkd[1234]: docker0: Link UP
Apr 17 23:28:06.812825 dockerd[1819]: time="2026-04-17T23:28:06.811701900Z" level=info msg="Loading containers: done."
Apr 17 23:28:06.835554 dockerd[1819]: time="2026-04-17T23:28:06.835501247Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:28:06.835847 dockerd[1819]: time="2026-04-17T23:28:06.835827591Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:28:06.836095 dockerd[1819]: time="2026-04-17T23:28:06.836067806Z" level=info msg="Daemon has completed initialization"
Apr 17 23:28:06.870658 dockerd[1819]: time="2026-04-17T23:28:06.870519406Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:28:06.871030 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:28:07.390834 containerd[1590]: time="2026-04-17T23:28:07.390416405Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 17 23:28:08.003253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495423724.mount: Deactivated successfully.
Apr 17 23:28:08.925105 containerd[1590]: time="2026-04-17T23:28:08.923722026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:08.925701 containerd[1590]: time="2026-04-17T23:28:08.925158503Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008885"
Apr 17 23:28:08.926890 containerd[1590]: time="2026-04-17T23:28:08.926842405Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:08.930987 containerd[1590]: time="2026-04-17T23:28:08.930932160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:08.932991 containerd[1590]: time="2026-04-17T23:28:08.932939958Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 1.542473499s"
Apr 17 23:28:08.933214 containerd[1590]: time="2026-04-17T23:28:08.933191075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\""
Apr 17 23:28:08.933892 containerd[1590]: time="2026-04-17T23:28:08.933865561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 17 23:28:09.960127 containerd[1590]: time="2026-04-17T23:28:09.959761329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:09.961625 containerd[1590]: time="2026-04-17T23:28:09.961580297Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297794"
Apr 17 23:28:09.962543 containerd[1590]: time="2026-04-17T23:28:09.961962387Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:09.967233 containerd[1590]: time="2026-04-17T23:28:09.967160233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:09.969105 containerd[1590]: time="2026-04-17T23:28:09.969024963Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.035010261s"
Apr 17 23:28:09.969105 containerd[1590]: time="2026-04-17T23:28:09.969103325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\""
Apr 17 23:28:09.969848 containerd[1590]: time="2026-04-17T23:28:09.969810728Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 17 23:28:10.659995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:28:10.666653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:28:10.833454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:28:10.835382 (kubelet)[2037]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:28:10.885007 kubelet[2037]: E0417 23:28:10.884545 2037 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:28:10.891395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:28:10.891740 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:28:11.091926 containerd[1590]: time="2026-04-17T23:28:11.091500664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:11.094413 containerd[1590]: time="2026-04-17T23:28:11.094359248Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141378"
Apr 17 23:28:11.097687 containerd[1590]: time="2026-04-17T23:28:11.097611300Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:11.102330 containerd[1590]: time="2026-04-17T23:28:11.102239038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:11.104136 containerd[1590]: time="2026-04-17T23:28:11.103839725Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.133984849s"
Apr 17 23:28:11.104136 containerd[1590]: time="2026-04-17T23:28:11.103888989Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\""
Apr 17 23:28:11.104681 containerd[1590]: time="2026-04-17T23:28:11.104556763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 17 23:28:12.015017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3847494013.mount: Deactivated successfully.
Apr 17 23:28:12.350709 containerd[1590]: time="2026-04-17T23:28:12.350253409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:12.352047 containerd[1590]: time="2026-04-17T23:28:12.351975710Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040534"
Apr 17 23:28:12.353129 containerd[1590]: time="2026-04-17T23:28:12.352803062Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:12.357762 containerd[1590]: time="2026-04-17T23:28:12.357686020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:12.359676 containerd[1590]: time="2026-04-17T23:28:12.359005717Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.254413956s"
Apr 17 23:28:12.360204 containerd[1590]: time="2026-04-17T23:28:12.360180013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\""
Apr 17 23:28:12.360836 containerd[1590]: time="2026-04-17T23:28:12.360812112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 17 23:28:12.895277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3444893852.mount: Deactivated successfully.
Apr 17 23:28:13.771424 containerd[1590]: time="2026-04-17T23:28:13.770167804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:13.773334 containerd[1590]: time="2026-04-17T23:28:13.773271934Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Apr 17 23:28:13.774593 containerd[1590]: time="2026-04-17T23:28:13.774528905Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:13.779625 containerd[1590]: time="2026-04-17T23:28:13.779544459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:13.783200 containerd[1590]: time="2026-04-17T23:28:13.783139137Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.422289126s"
Apr 17 23:28:13.783200 containerd[1590]: time="2026-04-17T23:28:13.783195464Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 17 23:28:13.783979 containerd[1590]: time="2026-04-17T23:28:13.783767159Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 17 23:28:14.220825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2564429425.mount: Deactivated successfully.
Apr 17 23:28:14.231083 containerd[1590]: time="2026-04-17T23:28:14.229293247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:14.231083 containerd[1590]: time="2026-04-17T23:28:14.230936616Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:14.231083 containerd[1590]: time="2026-04-17T23:28:14.231003480Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Apr 17 23:28:14.235683 containerd[1590]: time="2026-04-17T23:28:14.234383891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:14.235683 containerd[1590]: time="2026-04-17T23:28:14.235335318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 451.534605ms"
Apr 17 23:28:14.235683 containerd[1590]: time="2026-04-17T23:28:14.235368971Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 17 23:28:14.236114 containerd[1590]: time="2026-04-17T23:28:14.236090461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 17 23:28:14.764730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412102058.mount: Deactivated successfully.
Apr 17 23:28:15.554719 containerd[1590]: time="2026-04-17T23:28:15.553226372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:15.555261 containerd[1590]: time="2026-04-17T23:28:15.555229121Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886470"
Apr 17 23:28:15.556138 containerd[1590]: time="2026-04-17T23:28:15.556112541Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:15.560082 containerd[1590]: time="2026-04-17T23:28:15.559998897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:15.561620 containerd[1590]: time="2026-04-17T23:28:15.561565949Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.325376389s"
Apr 17 23:28:15.561620 containerd[1590]: time="2026-04-17T23:28:15.561607177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 17 23:28:19.002396 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:28:19.016570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:28:19.061392 systemd[1]: Reloading requested from client PID 2200 ('systemctl') (unit session-7.scope)...
Apr 17 23:28:19.061415 systemd[1]: Reloading...
Apr 17 23:28:19.196085 zram_generator::config[2247]: No configuration found.
Apr 17 23:28:19.307522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:28:19.380696 systemd[1]: Reloading finished in 318 ms.
Apr 17 23:28:19.434623 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 17 23:28:19.434896 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 17 23:28:19.435371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:28:19.440582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:28:19.571437 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:28:19.574471 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:28:19.618905 kubelet[2300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:28:19.618905 kubelet[2300]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 17 23:28:19.618905 kubelet[2300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:28:19.618905 kubelet[2300]: I0417 23:28:19.618462 2300 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 17 23:28:20.236675 kubelet[2300]: I0417 23:28:20.236633 2300 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 17 23:28:20.239107 kubelet[2300]: I0417 23:28:20.236850 2300 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:28:20.239107 kubelet[2300]: I0417 23:28:20.237158 2300 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 17 23:28:20.267070 kubelet[2300]: E0417 23:28:20.267002 2300 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://91.99.151.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:28:20.270075 kubelet[2300]: I0417 23:28:20.269594 2300 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:28:20.282746 kubelet[2300]: E0417 23:28:20.282703 2300 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:28:20.283009 kubelet[2300]: I0417 23:28:20.282992 2300 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:28:20.286814 kubelet[2300]: I0417 23:28:20.286775 2300 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 17 23:28:20.289218 kubelet[2300]: I0417 23:28:20.289163 2300 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:28:20.289531 kubelet[2300]: I0417 23:28:20.289340 2300 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-9c3210a1b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 17 23:28:20.289684 kubelet[2300]: I0417 23:28:20.289671 2300 topology_manager.go:138] "Creating topology manager with none policy"
Apr 17 23:28:20.289760 kubelet[2300]: I0417 23:28:20.289750 2300 container_manager_linux.go:303] "Creating device plugin manager"
Apr 17 23:28:20.290077 kubelet[2300]: I0417 23:28:20.290043 2300 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:28:20.293970 kubelet[2300]: I0417 23:28:20.293940 2300 kubelet.go:480] "Attempting to sync node with API server"
Apr 17 23:28:20.294147 kubelet[2300]: I0417 23:28:20.294134 2300 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:28:20.294231 kubelet[2300]: I0417 23:28:20.294222 2300 kubelet.go:386] "Adding apiserver pod source"
Apr 17 23:28:20.294297 kubelet[2300]: I0417 23:28:20.294288 2300 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:28:20.299155 kubelet[2300]: E0417 23:28:20.299087 2300 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.99.151.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-9c3210a1b0&limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 17 23:28:20.299280 kubelet[2300]: I0417 23:28:20.299214 2300 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:28:20.300689 kubelet[2300]: I0417 23:28:20.299978 2300 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:28:20.300689 kubelet[2300]: W0417 23:28:20.300175 2300 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:28:20.305472 kubelet[2300]: I0417 23:28:20.305441 2300 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 17 23:28:20.305578 kubelet[2300]: I0417 23:28:20.305493 2300 server.go:1289] "Started kubelet"
Apr 17 23:28:20.306927 kubelet[2300]: E0417 23:28:20.306902 2300 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://91.99.151.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 17 23:28:20.307238 kubelet[2300]: I0417 23:28:20.307208 2300 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 17 23:28:20.308630 kubelet[2300]: I0417 23:28:20.308550 2300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 17 23:28:20.309017 kubelet[2300]: I0417 23:28:20.308987 2300 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 17 23:28:20.309942 kubelet[2300]: I0417 23:28:20.309918 2300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 17 23:28:20.313692 kubelet[2300]: E0417 23:28:20.312014 2300 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.151.60:6443/api/v1/namespaces/default/events\": dial tcp 91.99.151.60:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-9c3210a1b0.18a748b05b60c62f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-9c3210a1b0,UID:ci-4081-3-6-n-9c3210a1b0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-9c3210a1b0,},FirstTimestamp:2026-04-17 23:28:20.305462831 +0000 UTC m=+0.726600854,LastTimestamp:2026-04-17 23:28:20.305462831 +0000 UTC m=+0.726600854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-9c3210a1b0,}"
Apr 17 23:28:20.314898 kubelet[2300]: I0417 23:28:20.314865 2300 server.go:317] "Adding debug handlers to kubelet server"
Apr 17 23:28:20.315949 kubelet[2300]: I0417 23:28:20.315890 2300 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 17 23:28:20.318849 kubelet[2300]: E0417 23:28:20.318816 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found"
Apr 17 23:28:20.318933 kubelet[2300]: I0417 23:28:20.318861 2300 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 17 23:28:20.319892 kubelet[2300]: I0417 23:28:20.319768 2300 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 17 23:28:20.319892 kubelet[2300]: I0417 23:28:20.319849 2300 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 23:28:20.321648 kubelet[2300]: E0417 23:28:20.320804 2300 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.99.151.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 17 23:28:20.321648 kubelet[2300]: I0417 23:28:20.321118 2300 factory.go:223] Registration of the systemd container factory successfully
Apr 17 23:28:20.321648 kubelet[2300]: I0417 23:28:20.321209 2300 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 17 23:28:20.322664 kubelet[2300]: E0417 23:28:20.322629 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.151.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9c3210a1b0?timeout=10s\": dial tcp 91.99.151.60:6443: connect: connection refused" interval="200ms"
Apr 17 23:28:20.323028 kubelet[2300]: E0417 23:28:20.322965 2300 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 17 23:28:20.323312 kubelet[2300]: I0417 23:28:20.323294 2300 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:28:20.345845 kubelet[2300]: I0417 23:28:20.345806 2300 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 17 23:28:20.348349 kubelet[2300]: I0417 23:28:20.348314 2300 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 17 23:28:20.348509 kubelet[2300]: I0417 23:28:20.348498 2300 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 17 23:28:20.348584 kubelet[2300]: I0417 23:28:20.348573 2300 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:28:20.348632 kubelet[2300]: I0417 23:28:20.348624 2300 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 17 23:28:20.348748 kubelet[2300]: E0417 23:28:20.348728 2300 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:28:20.353779 kubelet[2300]: E0417 23:28:20.353738 2300 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.99.151.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 17 23:28:20.355043 kubelet[2300]: I0417 23:28:20.355017 2300 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:28:20.355219 kubelet[2300]: I0417 23:28:20.355205 2300 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:28:20.355344 kubelet[2300]: I0417 23:28:20.355334 2300 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:28:20.357898 kubelet[2300]: I0417 23:28:20.357871 2300 policy_none.go:49] "None policy: Start"
Apr 17 23:28:20.358338 kubelet[2300]: I0417 23:28:20.358063 2300 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:28:20.358338 kubelet[2300]: I0417 23:28:20.358099 2300 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 23:28:20.363577 kubelet[2300]: E0417 23:28:20.363540 2300 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:28:20.365097 kubelet[2300]: I0417 23:28:20.363941 2300 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:28:20.365097 kubelet[2300]: I0417 23:28:20.363962 2300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:28:20.366342 kubelet[2300]: I0417 23:28:20.366313 2300 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:28:20.369910 kubelet[2300]: E0417 23:28:20.369884 2300 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:28:20.370150 kubelet[2300]: E0417 23:28:20.370133 2300 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-9c3210a1b0\" not found"
Apr 17 23:28:20.464329 kubelet[2300]: E0417 23:28:20.464283 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.472328 kubelet[2300]: E0417 23:28:20.472286 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.475608 kubelet[2300]: E0417 23:28:20.475573 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.476089 kubelet[2300]: I0417 23:28:20.476041 2300 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.476599 kubelet[2300]: E0417 23:28:20.476570 2300 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.151.60:6443/api/v1/nodes\": dial tcp 91.99.151.60:6443: connect: connection refused" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.521531 kubelet[2300]: I0417 23:28:20.521310 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.521531 kubelet[2300]: I0417 23:28:20.521396 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab15dd41dae646881d4afad3049c32d3-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-9c3210a1b0\" (UID: \"ab15dd41dae646881d4afad3049c32d3\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.521531 kubelet[2300]: I0417 23:28:20.521435 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da1167cf3ad0bd910da87ef9ea134954-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" (UID: \"da1167cf3ad0bd910da87ef9ea134954\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.521531 kubelet[2300]: I0417 23:28:20.521468 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.521531 kubelet[2300]: I0417 23:28:20.521507 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.522097 kubelet[2300]: I0417 23:28:20.521537 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.522097 kubelet[2300]: I0417 23:28:20.521595 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.522097 kubelet[2300]: I0417 23:28:20.521632 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da1167cf3ad0bd910da87ef9ea134954-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" (UID: \"da1167cf3ad0bd910da87ef9ea134954\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.522097 kubelet[2300]: I0417 23:28:20.521678 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da1167cf3ad0bd910da87ef9ea134954-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" (UID: \"da1167cf3ad0bd910da87ef9ea134954\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:20.524543 kubelet[2300]: E0417 23:28:20.524498 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.151.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9c3210a1b0?timeout=10s\": dial tcp 91.99.151.60:6443: connect: connection refused" interval="400ms"
Apr 17 23:28:20.680072 kubelet[2300]: I0417 23:28:20.679947 2300
kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:20.680706 kubelet[2300]: E0417 23:28:20.680464 2300 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.151.60:6443/api/v1/nodes\": dial tcp 91.99.151.60:6443: connect: connection refused" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:20.767369 containerd[1590]: time="2026-04-17T23:28:20.766789426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-9c3210a1b0,Uid:da1167cf3ad0bd910da87ef9ea134954,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:20.774446 containerd[1590]: time="2026-04-17T23:28:20.774244412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-9c3210a1b0,Uid:1edf51fbb0f82a9a32441af291321e8a,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:20.777890 containerd[1590]: time="2026-04-17T23:28:20.777585102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-9c3210a1b0,Uid:ab15dd41dae646881d4afad3049c32d3,Namespace:kube-system,Attempt:0,}" Apr 17 23:28:20.925409 kubelet[2300]: E0417 23:28:20.925288 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.151.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9c3210a1b0?timeout=10s\": dial tcp 91.99.151.60:6443: connect: connection refused" interval="800ms" Apr 17 23:28:21.083621 kubelet[2300]: I0417 23:28:21.083494 2300 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:21.084524 kubelet[2300]: E0417 23:28:21.084480 2300 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.151.60:6443/api/v1/nodes\": dial tcp 91.99.151.60:6443: connect: connection refused" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:21.174911 kubelet[2300]: E0417 23:28:21.174839 2300 reflector.go:200] "Failed to watch" 
err="failed to list *v1.Service: Get \"https://91.99.151.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:28:21.213356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount278637213.mount: Deactivated successfully. Apr 17 23:28:21.221759 containerd[1590]: time="2026-04-17T23:28:21.221688192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:21.225440 containerd[1590]: time="2026-04-17T23:28:21.225387108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Apr 17 23:28:21.227090 containerd[1590]: time="2026-04-17T23:28:21.226315506Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:21.227751 containerd[1590]: time="2026-04-17T23:28:21.227712675Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:21.229830 containerd[1590]: time="2026-04-17T23:28:21.229785330Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:21.231277 containerd[1590]: time="2026-04-17T23:28:21.231243170Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:28:21.232229 containerd[1590]: time="2026-04-17T23:28:21.232181834Z" level=info msg="stop 
pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:28:21.233683 containerd[1590]: time="2026-04-17T23:28:21.233646531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:28:21.237457 containerd[1590]: time="2026-04-17T23:28:21.237406520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 463.00755ms" Apr 17 23:28:21.240661 containerd[1590]: time="2026-04-17T23:28:21.240580525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.665306ms" Apr 17 23:28:21.247507 containerd[1590]: time="2026-04-17T23:28:21.247438970Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.757397ms" Apr 17 23:28:21.372998 containerd[1590]: time="2026-04-17T23:28:21.372668782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:21.372998 containerd[1590]: time="2026-04-17T23:28:21.372752471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:21.372998 containerd[1590]: time="2026-04-17T23:28:21.372769714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:21.373726 containerd[1590]: time="2026-04-17T23:28:21.373406383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:21.376110 containerd[1590]: time="2026-04-17T23:28:21.375951178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:21.376110 containerd[1590]: time="2026-04-17T23:28:21.376016100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:21.376110 containerd[1590]: time="2026-04-17T23:28:21.376032300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:21.376424 containerd[1590]: time="2026-04-17T23:28:21.376385983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:21.383551 containerd[1590]: time="2026-04-17T23:28:21.383270133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:21.383922 containerd[1590]: time="2026-04-17T23:28:21.383819103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:21.384304 containerd[1590]: time="2026-04-17T23:28:21.384245087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:21.386808 containerd[1590]: time="2026-04-17T23:28:21.386718383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:21.410234 kubelet[2300]: E0417 23:28:21.410182 2300 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.99.151.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:28:21.415660 kubelet[2300]: E0417 23:28:21.415604 2300 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.99.151.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-9c3210a1b0&limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:28:21.460772 containerd[1590]: time="2026-04-17T23:28:21.460322648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-9c3210a1b0,Uid:da1167cf3ad0bd910da87ef9ea134954,Namespace:kube-system,Attempt:0,} returns sandbox id \"24a6786f97bcfc974e822380aaaf8e069d6a0d100048fe672883f6026d112f0b\"" Apr 17 23:28:21.471094 containerd[1590]: time="2026-04-17T23:28:21.470460723Z" level=info msg="CreateContainer within sandbox \"24a6786f97bcfc974e822380aaaf8e069d6a0d100048fe672883f6026d112f0b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:28:21.472193 containerd[1590]: time="2026-04-17T23:28:21.471510985Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-9c3210a1b0,Uid:1edf51fbb0f82a9a32441af291321e8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fa1e17eca7ebb5f902da8bc8698fe99c971a828e33a2c98cadd563aa61a0abc\"" Apr 17 23:28:21.473768 containerd[1590]: time="2026-04-17T23:28:21.473627069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-9c3210a1b0,Uid:ab15dd41dae646881d4afad3049c32d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba115955bdab936650532c18341facfcb653bfe67ad05c830d3cc6f05ad0552c\"" Apr 17 23:28:21.480238 containerd[1590]: time="2026-04-17T23:28:21.480024082Z" level=info msg="CreateContainer within sandbox \"6fa1e17eca7ebb5f902da8bc8698fe99c971a828e33a2c98cadd563aa61a0abc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:28:21.482215 containerd[1590]: time="2026-04-17T23:28:21.482035143Z" level=info msg="CreateContainer within sandbox \"ba115955bdab936650532c18341facfcb653bfe67ad05c830d3cc6f05ad0552c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:28:21.497668 kubelet[2300]: E0417 23:28:21.497630 2300 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.99.151.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.151.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:28:21.498383 containerd[1590]: time="2026-04-17T23:28:21.497967445Z" level=info msg="CreateContainer within sandbox \"24a6786f97bcfc974e822380aaaf8e069d6a0d100048fe672883f6026d112f0b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"65d9194aa2aa83143f03cd6bef41bc389383b876f40d2813359f69712e4629af\"" Apr 17 23:28:21.499711 containerd[1590]: time="2026-04-17T23:28:21.499673465Z" level=info msg="StartContainer for 
\"65d9194aa2aa83143f03cd6bef41bc389383b876f40d2813359f69712e4629af\"" Apr 17 23:28:21.502707 containerd[1590]: time="2026-04-17T23:28:21.502659922Z" level=info msg="CreateContainer within sandbox \"6fa1e17eca7ebb5f902da8bc8698fe99c971a828e33a2c98cadd563aa61a0abc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"577500517293429c96188ba4ee5fed3c9a7682f1ef9156dc85ec7fed95567207\"" Apr 17 23:28:21.504699 containerd[1590]: time="2026-04-17T23:28:21.503428241Z" level=info msg="StartContainer for \"577500517293429c96188ba4ee5fed3c9a7682f1ef9156dc85ec7fed95567207\"" Apr 17 23:28:21.509140 containerd[1590]: time="2026-04-17T23:28:21.509093587Z" level=info msg="CreateContainer within sandbox \"ba115955bdab936650532c18341facfcb653bfe67ad05c830d3cc6f05ad0552c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"21c669859809e7a8c28bfb34a32cf7421d5b50cfc1f52d9858edea90af79f337\"" Apr 17 23:28:21.510589 containerd[1590]: time="2026-04-17T23:28:21.510542364Z" level=info msg="StartContainer for \"21c669859809e7a8c28bfb34a32cf7421d5b50cfc1f52d9858edea90af79f337\"" Apr 17 23:28:21.615860 containerd[1590]: time="2026-04-17T23:28:21.615809250Z" level=info msg="StartContainer for \"65d9194aa2aa83143f03cd6bef41bc389383b876f40d2813359f69712e4629af\" returns successfully" Apr 17 23:28:21.632635 containerd[1590]: time="2026-04-17T23:28:21.632467645Z" level=info msg="StartContainer for \"577500517293429c96188ba4ee5fed3c9a7682f1ef9156dc85ec7fed95567207\" returns successfully" Apr 17 23:28:21.635167 containerd[1590]: time="2026-04-17T23:28:21.635117140Z" level=info msg="StartContainer for \"21c669859809e7a8c28bfb34a32cf7421d5b50cfc1f52d9858edea90af79f337\" returns successfully" Apr 17 23:28:21.727453 kubelet[2300]: E0417 23:28:21.727395 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://91.99.151.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-9c3210a1b0?timeout=10s\": dial tcp 91.99.151.60:6443: connect: connection refused" interval="1.6s" Apr 17 23:28:21.890167 kubelet[2300]: I0417 23:28:21.888591 2300 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:22.370688 kubelet[2300]: E0417 23:28:22.370635 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:22.374908 kubelet[2300]: E0417 23:28:22.374871 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:22.382236 kubelet[2300]: E0417 23:28:22.382167 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:23.386070 kubelet[2300]: E0417 23:28:23.385951 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:23.387133 kubelet[2300]: E0417 23:28:23.386603 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:23.659833 kubelet[2300]: E0417 23:28:23.659764 2300 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:23.749719 kubelet[2300]: I0417 23:28:23.749680 2300 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:23.749719 
kubelet[2300]: E0417 23:28:23.749725 2300 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-9c3210a1b0\": node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:23.762120 kubelet[2300]: E0417 23:28:23.762068 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:23.862682 kubelet[2300]: E0417 23:28:23.862607 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:23.963884 kubelet[2300]: E0417 23:28:23.963722 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:24.065215 kubelet[2300]: E0417 23:28:24.064964 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:24.165745 kubelet[2300]: E0417 23:28:24.165583 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:24.176852 kubelet[2300]: E0417 23:28:24.176637 2300 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" node="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:24.267130 kubelet[2300]: E0417 23:28:24.266449 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:24.367534 kubelet[2300]: E0417 23:28:24.367421 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:24.468654 kubelet[2300]: E0417 23:28:24.468343 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:24.569232 kubelet[2300]: 
E0417 23:28:24.569035 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-9c3210a1b0\" not found" Apr 17 23:28:24.622712 kubelet[2300]: I0417 23:28:24.622646 2300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:24.634343 kubelet[2300]: I0417 23:28:24.634299 2300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:24.639755 kubelet[2300]: I0417 23:28:24.639696 2300 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0" Apr 17 23:28:25.312985 kubelet[2300]: I0417 23:28:25.312929 2300 apiserver.go:52] "Watching apiserver" Apr 17 23:28:25.321109 kubelet[2300]: I0417 23:28:25.321028 2300 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:28:26.041039 systemd[1]: Reloading requested from client PID 2590 ('systemctl') (unit session-7.scope)... Apr 17 23:28:26.041069 systemd[1]: Reloading... Apr 17 23:28:26.161082 zram_generator::config[2639]: No configuration found. Apr 17 23:28:26.267705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:28:26.345959 systemd[1]: Reloading finished in 304 ms. Apr 17 23:28:26.382509 kubelet[2300]: I0417 23:28:26.382334 2300 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:28:26.382729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:28:26.398857 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:28:26.399337 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:28:26.407716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:28:26.551420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:28:26.566748 (kubelet)[2685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:28:26.632652 kubelet[2685]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:28:26.632652 kubelet[2685]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:28:26.632652 kubelet[2685]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:28:26.632652 kubelet[2685]: I0417 23:28:26.632594 2685 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:28:26.647113 kubelet[2685]: I0417 23:28:26.647063 2685 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:28:26.647113 kubelet[2685]: I0417 23:28:26.647114 2685 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:28:26.647418 kubelet[2685]: I0417 23:28:26.647379 2685 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:28:26.649345 kubelet[2685]: I0417 23:28:26.649281 2685 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:28:26.652257 kubelet[2685]: I0417 23:28:26.652014 2685 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:28:26.656730 kubelet[2685]: E0417 23:28:26.656688 2685 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:28:26.657288 kubelet[2685]: I0417 23:28:26.656955 2685 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:28:26.660006 kubelet[2685]: I0417 23:28:26.659977 2685 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:28:26.660812 kubelet[2685]: I0417 23:28:26.660772 2685 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:28:26.661263 kubelet[2685]: I0417 23:28:26.660920 2685 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-9c3210a1b0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 17 23:28:26.661263 kubelet[2685]: I0417 23:28:26.661155 2685 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:28:26.661263 kubelet[2685]: I0417 23:28:26.661166 2685 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:28:26.661263 kubelet[2685]: I0417 23:28:26.661225 2685 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:28:26.661839 kubelet[2685]: I0417 23:28:26.661689 2685 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:28:26.661839 kubelet[2685]: I0417 23:28:26.661714 2685 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:28:26.661839 kubelet[2685]: I0417 23:28:26.661768 2685 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:28:26.661839 kubelet[2685]: I0417 23:28:26.661786 2685 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:28:26.668091 kubelet[2685]: I0417 23:28:26.666497 2685 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:28:26.668091 kubelet[2685]: I0417 23:28:26.667247 2685 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:28:26.675082 kubelet[2685]: I0417 23:28:26.675029 2685 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:28:26.675344 kubelet[2685]: I0417 23:28:26.675328 2685 server.go:1289] "Started kubelet" Apr 17 23:28:26.682205 kubelet[2685]: I0417 23:28:26.682162 2685 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:28:26.698081 kubelet[2685]: I0417 23:28:26.696581 2685 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:28:26.701966 kubelet[2685]: I0417 23:28:26.699711 2685 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:28:26.706420 kubelet[2685]: I0417 23:28:26.706357 2685 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:28:26.706634 kubelet[2685]: I0417 23:28:26.706613 2685 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:28:26.706886 kubelet[2685]: I0417 23:28:26.706859 2685 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:28:26.709191 kubelet[2685]: I0417 23:28:26.709156 2685 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:28:26.722649 kubelet[2685]: I0417 23:28:26.722605 2685 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:28:26.722849 kubelet[2685]: I0417 23:28:26.722822 2685 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:28:26.723403 kubelet[2685]: I0417 23:28:26.723381 2685 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:28:26.723666 kubelet[2685]: I0417 23:28:26.723639 2685 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:28:26.728155 kubelet[2685]: I0417 23:28:26.728127 2685 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:28:26.731591 kubelet[2685]: I0417 23:28:26.731541 2685 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:28:26.732671 kubelet[2685]: I0417 23:28:26.732639 2685 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:28:26.732671 kubelet[2685]: I0417 23:28:26.732664 2685 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:28:26.732784 kubelet[2685]: I0417 23:28:26.732685 2685 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:28:26.732784 kubelet[2685]: I0417 23:28:26.732692 2685 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 17 23:28:26.732784 kubelet[2685]: E0417 23:28:26.732730 2685 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 17 23:28:26.794943 kubelet[2685]: I0417 23:28:26.794904 2685 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 17 23:28:26.794943 kubelet[2685]: I0417 23:28:26.794924 2685 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 17 23:28:26.794943 kubelet[2685]: I0417 23:28:26.794945 2685 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:28:26.795236 kubelet[2685]: I0417 23:28:26.795107 2685 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 17 23:28:26.795236 kubelet[2685]: I0417 23:28:26.795121 2685 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 17 23:28:26.795236 kubelet[2685]: I0417 23:28:26.795139 2685 policy_none.go:49] "None policy: Start"
Apr 17 23:28:26.795236 kubelet[2685]: I0417 23:28:26.795148 2685 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:28:26.795236 kubelet[2685]: I0417 23:28:26.795157 2685 state_mem.go:35] "Initializing new in-memory state store"
Apr 17 23:28:26.795236 kubelet[2685]: I0417 23:28:26.795240 2685 state_mem.go:75] "Updated machine memory state"
Apr 17 23:28:26.796532 kubelet[2685]: E0417 23:28:26.796488 2685 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 17 23:28:26.796722 kubelet[2685]: I0417 23:28:26.796701 2685 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 17 23:28:26.796757 kubelet[2685]: I0417 23:28:26.796722 2685 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 17 23:28:26.800418 kubelet[2685]: I0417 23:28:26.800113 2685 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 17 23:28:26.805085 kubelet[2685]: E0417 23:28:26.804627 2685 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 17 23:28:26.834490 kubelet[2685]: I0417 23:28:26.834407 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.835309 kubelet[2685]: I0417 23:28:26.835038 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.835309 kubelet[2685]: I0417 23:28:26.835241 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.843902 kubelet[2685]: E0417 23:28:26.843824 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.845587 kubelet[2685]: E0417 23:28:26.845454 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.845587 kubelet[2685]: E0417 23:28:26.845506 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-9c3210a1b0\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.909198 kubelet[2685]: I0417 23:28:26.907533 2685 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.922266 kubelet[2685]: I0417 23:28:26.921869 2685 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.922266 kubelet[2685]: I0417 23:28:26.922001 2685 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.923430 kubelet[2685]: I0417 23:28:26.923131 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.923430 kubelet[2685]: I0417 23:28:26.923299 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.923430 kubelet[2685]: I0417 23:28:26.923324 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:26.923430 kubelet[2685]: I0417 23:28:26.923343 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.024720 kubelet[2685]: I0417 23:28:27.024409 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1edf51fbb0f82a9a32441af291321e8a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-9c3210a1b0\" (UID: \"1edf51fbb0f82a9a32441af291321e8a\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.024720 kubelet[2685]: I0417 23:28:27.024453 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab15dd41dae646881d4afad3049c32d3-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-9c3210a1b0\" (UID: \"ab15dd41dae646881d4afad3049c32d3\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.024720 kubelet[2685]: I0417 23:28:27.024469 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/da1167cf3ad0bd910da87ef9ea134954-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" (UID: \"da1167cf3ad0bd910da87ef9ea134954\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.024720 kubelet[2685]: I0417 23:28:27.024515 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/da1167cf3ad0bd910da87ef9ea134954-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" (UID: \"da1167cf3ad0bd910da87ef9ea134954\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.024720 kubelet[2685]: I0417 23:28:27.024534 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/da1167cf3ad0bd910da87ef9ea134954-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" (UID: \"da1167cf3ad0bd910da87ef9ea134954\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.679183 kubelet[2685]: I0417 23:28:27.679135 2685 apiserver.go:52] "Watching apiserver"
Apr 17 23:28:27.723286 kubelet[2685]: I0417 23:28:27.723226 2685 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 17 23:28:27.769086 kubelet[2685]: I0417 23:28:27.766177 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.769086 kubelet[2685]: I0417 23:28:27.766590 2685 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.784025 kubelet[2685]: E0417 23:28:27.783505 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-9c3210a1b0\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.784025 kubelet[2685]: E0417 23:28:27.783738 2685 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-9c3210a1b0\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:28:27.804266 kubelet[2685]: I0417 23:28:27.804204 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-9c3210a1b0" podStartSLOduration=3.8041664490000002 podStartE2EDuration="3.804166449s" podCreationTimestamp="2026-04-17 23:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:27.803893062 +0000 UTC m=+1.228648358" watchObservedRunningTime="2026-04-17 23:28:27.804166449 +0000 UTC m=+1.228921745"
Apr 17 23:28:27.841666 kubelet[2685]: I0417 23:28:27.837723 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-9c3210a1b0" podStartSLOduration=3.83770084 podStartE2EDuration="3.83770084s" podCreationTimestamp="2026-04-17 23:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:27.818661755 +0000 UTC m=+1.243417051" watchObservedRunningTime="2026-04-17 23:28:27.83770084 +0000 UTC m=+1.262456136"
Apr 17 23:28:27.845905 kubelet[2685]: I0417 23:28:27.845764 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-9c3210a1b0" podStartSLOduration=3.845713311 podStartE2EDuration="3.845713311s" podCreationTimestamp="2026-04-17 23:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:27.83743374 +0000 UTC m=+1.262189036" watchObservedRunningTime="2026-04-17 23:28:27.845713311 +0000 UTC m=+1.270468647"
Apr 17 23:28:32.491740 kubelet[2685]: I0417 23:28:32.491700 2685 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 17 23:28:32.492788 containerd[1590]: time="2026-04-17T23:28:32.492293366Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 17 23:28:32.493822 kubelet[2685]: I0417 23:28:32.493364 2685 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 17 23:28:33.061889 kubelet[2685]: I0417 23:28:33.061572 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfbbdae1-2914-4515-9b6a-d1d036a3d767-xtables-lock\") pod \"kube-proxy-lzgfr\" (UID: \"bfbbdae1-2914-4515-9b6a-d1d036a3d767\") " pod="kube-system/kube-proxy-lzgfr"
Apr 17 23:28:33.061889 kubelet[2685]: I0417 23:28:33.061686 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bfbbdae1-2914-4515-9b6a-d1d036a3d767-kube-proxy\") pod \"kube-proxy-lzgfr\" (UID: \"bfbbdae1-2914-4515-9b6a-d1d036a3d767\") " pod="kube-system/kube-proxy-lzgfr"
Apr 17 23:28:33.061889 kubelet[2685]: I0417 23:28:33.061747 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfbbdae1-2914-4515-9b6a-d1d036a3d767-lib-modules\") pod \"kube-proxy-lzgfr\" (UID: \"bfbbdae1-2914-4515-9b6a-d1d036a3d767\") " pod="kube-system/kube-proxy-lzgfr"
Apr 17 23:28:33.061889 kubelet[2685]: I0417 23:28:33.061784 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25qdb\" (UniqueName: \"kubernetes.io/projected/bfbbdae1-2914-4515-9b6a-d1d036a3d767-kube-api-access-25qdb\") pod \"kube-proxy-lzgfr\" (UID: \"bfbbdae1-2914-4515-9b6a-d1d036a3d767\") " pod="kube-system/kube-proxy-lzgfr"
Apr 17 23:28:33.175129 kubelet[2685]: E0417 23:28:33.174215 2685 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 17 23:28:33.175129 kubelet[2685]: E0417 23:28:33.174248 2685 projected.go:194] Error preparing data for projected volume kube-api-access-25qdb for pod kube-system/kube-proxy-lzgfr: configmap "kube-root-ca.crt" not found
Apr 17 23:28:33.175129 kubelet[2685]: E0417 23:28:33.174318 2685 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bfbbdae1-2914-4515-9b6a-d1d036a3d767-kube-api-access-25qdb podName:bfbbdae1-2914-4515-9b6a-d1d036a3d767 nodeName:}" failed. No retries permitted until 2026-04-17 23:28:33.674291432 +0000 UTC m=+7.099046728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-25qdb" (UniqueName: "kubernetes.io/projected/bfbbdae1-2914-4515-9b6a-d1d036a3d767-kube-api-access-25qdb") pod "kube-proxy-lzgfr" (UID: "bfbbdae1-2914-4515-9b6a-d1d036a3d767") : configmap "kube-root-ca.crt" not found
Apr 17 23:28:33.669453 kubelet[2685]: I0417 23:28:33.669250 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50273e7a-c327-4f0a-9d11-9c3d1978a90d-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-c2z2n\" (UID: \"50273e7a-c327-4f0a-9d11-9c3d1978a90d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c2z2n"
Apr 17 23:28:33.669453 kubelet[2685]: I0417 23:28:33.669356 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8btk6\" (UniqueName: \"kubernetes.io/projected/50273e7a-c327-4f0a-9d11-9c3d1978a90d-kube-api-access-8btk6\") pod \"tigera-operator-6bf85f8dd-c2z2n\" (UID: \"50273e7a-c327-4f0a-9d11-9c3d1978a90d\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c2z2n"
Apr 17 23:28:33.841527 containerd[1590]: time="2026-04-17T23:28:33.840781181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzgfr,Uid:bfbbdae1-2914-4515-9b6a-d1d036a3d767,Namespace:kube-system,Attempt:0,}"
Apr 17 23:28:33.875727 containerd[1590]: time="2026-04-17T23:28:33.875564721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:28:33.875727 containerd[1590]: time="2026-04-17T23:28:33.875659931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:28:33.875727 containerd[1590]: time="2026-04-17T23:28:33.875682193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:28:33.876222 containerd[1590]: time="2026-04-17T23:28:33.875800425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:28:33.921354 containerd[1590]: time="2026-04-17T23:28:33.921158880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lzgfr,Uid:bfbbdae1-2914-4515-9b6a-d1d036a3d767,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b2ce0aa179baa68b887370e25e1fc5bb88cf3f95607861f384cf8d160e871a8\""
Apr 17 23:28:33.925875 containerd[1590]: time="2026-04-17T23:28:33.925379020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c2z2n,Uid:50273e7a-c327-4f0a-9d11-9c3d1978a90d,Namespace:tigera-operator,Attempt:0,}"
Apr 17 23:28:33.929605 containerd[1590]: time="2026-04-17T23:28:33.929326341Z" level=info msg="CreateContainer within sandbox \"7b2ce0aa179baa68b887370e25e1fc5bb88cf3f95607861f384cf8d160e871a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:28:33.956035 containerd[1590]: time="2026-04-17T23:28:33.955948345Z" level=info msg="CreateContainer within sandbox \"7b2ce0aa179baa68b887370e25e1fc5bb88cf3f95607861f384cf8d160e871a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cde6fd88b102ddaf1960c6b1e1a399ac97b1a3379bf41fb5c57101f82bd28e2c\""
Apr 17 23:28:33.957679 containerd[1590]: time="2026-04-17T23:28:33.957518481Z" level=info msg="StartContainer for \"cde6fd88b102ddaf1960c6b1e1a399ac97b1a3379bf41fb5c57101f82bd28e2c\""
Apr 17 23:28:33.970502 containerd[1590]: time="2026-04-17T23:28:33.970252693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:28:33.970815 containerd[1590]: time="2026-04-17T23:28:33.970462373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:28:33.970815 containerd[1590]: time="2026-04-17T23:28:33.970483513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:28:33.970815 containerd[1590]: time="2026-04-17T23:28:33.970600825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:28:34.042077 containerd[1590]: time="2026-04-17T23:28:34.041401858Z" level=info msg="StartContainer for \"cde6fd88b102ddaf1960c6b1e1a399ac97b1a3379bf41fb5c57101f82bd28e2c\" returns successfully"
Apr 17 23:28:34.042480 containerd[1590]: time="2026-04-17T23:28:34.042331255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c2z2n,Uid:50273e7a-c327-4f0a-9d11-9c3d1978a90d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8d54ee83239d7fd7940f05ad55fda39ed491711a19c48053351a45b1fe3a0e09\""
Apr 17 23:28:34.046931 containerd[1590]: time="2026-04-17T23:28:34.046775217Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 17 23:28:34.810339 kubelet[2685]: I0417 23:28:34.810242 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lzgfr" podStartSLOduration=2.810206443 podStartE2EDuration="2.810206443s" podCreationTimestamp="2026-04-17 23:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:28:34.807864133 +0000 UTC m=+8.232619469" watchObservedRunningTime="2026-04-17 23:28:34.810206443 +0000 UTC m=+8.234961739"
Apr 17 23:28:35.822997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974615394.mount: Deactivated successfully.
Apr 17 23:28:36.514564 containerd[1590]: time="2026-04-17T23:28:36.514473098Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:36.516203 containerd[1590]: time="2026-04-17T23:28:36.516041202Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 17 23:28:36.518466 containerd[1590]: time="2026-04-17T23:28:36.517170312Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:36.521155 containerd[1590]: time="2026-04-17T23:28:36.521097958Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:36.522150 containerd[1590]: time="2026-04-17T23:28:36.522090919Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.475265858s"
Apr 17 23:28:36.522150 containerd[1590]: time="2026-04-17T23:28:36.522145803Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 17 23:28:36.531097 containerd[1590]: time="2026-04-17T23:28:36.531024559Z" level=info msg="CreateContainer within sandbox \"8d54ee83239d7fd7940f05ad55fda39ed491711a19c48053351a45b1fe3a0e09\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 17 23:28:36.549305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640242575.mount: Deactivated successfully.
Apr 17 23:28:36.552909 containerd[1590]: time="2026-04-17T23:28:36.552839463Z" level=info msg="CreateContainer within sandbox \"8d54ee83239d7fd7940f05ad55fda39ed491711a19c48053351a45b1fe3a0e09\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4\""
Apr 17 23:28:36.555651 containerd[1590]: time="2026-04-17T23:28:36.553934546Z" level=info msg="StartContainer for \"a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4\""
Apr 17 23:28:36.621764 containerd[1590]: time="2026-04-17T23:28:36.621700727Z" level=info msg="StartContainer for \"a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4\" returns successfully"
Apr 17 23:28:36.824932 systemd[1]: run-containerd-runc-k8s.io-a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4-runc.ZkOaKX.mount: Deactivated successfully.
Apr 17 23:28:37.794494 kubelet[2685]: I0417 23:28:37.794392 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-c2z2n" podStartSLOduration=2.316282542 podStartE2EDuration="4.794314771s" podCreationTimestamp="2026-04-17 23:28:33 +0000 UTC" firstStartedPulling="2026-04-17 23:28:34.046161505 +0000 UTC m=+7.470916801" lastFinishedPulling="2026-04-17 23:28:36.524193774 +0000 UTC m=+9.948949030" observedRunningTime="2026-04-17 23:28:36.819297716 +0000 UTC m=+10.244053012" watchObservedRunningTime="2026-04-17 23:28:37.794314771 +0000 UTC m=+11.219070067"
Apr 17 23:28:43.069217 sudo[1804]: pam_unix(sudo:session): session closed for user root
Apr 17 23:28:43.085722 sshd[1800]: pam_unix(sshd:session): session closed for user core
Apr 17 23:28:43.094500 systemd[1]: sshd@6-91.99.151.60:22-50.85.169.122:53810.service: Deactivated successfully.
Apr 17 23:28:43.105024 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:28:43.107271 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:28:43.113720 systemd-logind[1562]: Removed session 7.
Apr 17 23:28:43.856763 update_engine[1566]: I20260417 23:28:43.856076 1566 update_attempter.cc:509] Updating boot flags...
Apr 17 23:28:43.957062 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 32 scanned by (udev-worker) (3081)
Apr 17 23:28:49.785389 kubelet[2685]: I0417 23:28:49.783697 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea36cf7f-b462-4992-b8b7-cacbabf826f3-tigera-ca-bundle\") pod \"calico-typha-67db9f69bd-tqm6k\" (UID: \"ea36cf7f-b462-4992-b8b7-cacbabf826f3\") " pod="calico-system/calico-typha-67db9f69bd-tqm6k"
Apr 17 23:28:49.786309 kubelet[2685]: I0417 23:28:49.785737 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ea36cf7f-b462-4992-b8b7-cacbabf826f3-typha-certs\") pod \"calico-typha-67db9f69bd-tqm6k\" (UID: \"ea36cf7f-b462-4992-b8b7-cacbabf826f3\") " pod="calico-system/calico-typha-67db9f69bd-tqm6k"
Apr 17 23:28:49.787211 kubelet[2685]: I0417 23:28:49.787168 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87lx\" (UniqueName: \"kubernetes.io/projected/ea36cf7f-b462-4992-b8b7-cacbabf826f3-kube-api-access-z87lx\") pod \"calico-typha-67db9f69bd-tqm6k\" (UID: \"ea36cf7f-b462-4992-b8b7-cacbabf826f3\") " pod="calico-system/calico-typha-67db9f69bd-tqm6k"
Apr 17 23:28:49.991277 kubelet[2685]: I0417 23:28:49.990118 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-cni-net-dir\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991277 kubelet[2685]: I0417 23:28:49.990165 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-flexvol-driver-host\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991277 kubelet[2685]: I0417 23:28:49.990186 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-cni-log-dir\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991277 kubelet[2685]: I0417 23:28:49.990212 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-var-run-calico\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991277 kubelet[2685]: I0417 23:28:49.990230 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f345927f-53e5-4506-a004-6ebce958c278-node-certs\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991603 kubelet[2685]: I0417 23:28:49.990243 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-xtables-lock\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991603 kubelet[2685]: I0417 23:28:49.990266 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-cni-bin-dir\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991603 kubelet[2685]: I0417 23:28:49.990298 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-sys-fs\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991603 kubelet[2685]: I0417 23:28:49.990312 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-policysync\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991603 kubelet[2685]: I0417 23:28:49.990326 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-var-lib-calico\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991756 kubelet[2685]: I0417 23:28:49.990343 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-lib-modules\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991756 kubelet[2685]: I0417 23:28:49.990359 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-bpffs\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991756 kubelet[2685]: I0417 23:28:49.990372 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f345927f-53e5-4506-a004-6ebce958c278-nodeproc\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991756 kubelet[2685]: I0417 23:28:49.990387 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f345927f-53e5-4506-a004-6ebce958c278-tigera-ca-bundle\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:49.991756 kubelet[2685]: I0417 23:28:49.990405 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g754\" (UniqueName: \"kubernetes.io/projected/f345927f-53e5-4506-a004-6ebce958c278-kube-api-access-4g754\") pod \"calico-node-5fn97\" (UID: \"f345927f-53e5-4506-a004-6ebce958c278\") " pod="calico-system/calico-node-5fn97"
Apr 17 23:28:50.005705 kubelet[2685]: E0417 23:28:50.005356 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d"
Apr 17 23:28:50.077177 containerd[1590]: time="2026-04-17T23:28:50.074984035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67db9f69bd-tqm6k,Uid:ea36cf7f-b462-4992-b8b7-cacbabf826f3,Namespace:calico-system,Attempt:0,}"
Apr 17 23:28:50.091704 kubelet[2685]: I0417 23:28:50.091600 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d-registration-dir\") pod \"csi-node-driver-csfn9\" (UID: \"c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d\") " pod="calico-system/csi-node-driver-csfn9"
Apr 17 23:28:50.091704 kubelet[2685]: I0417 23:28:50.091694 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d-varrun\") pod \"csi-node-driver-csfn9\" (UID: \"c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d\") " pod="calico-system/csi-node-driver-csfn9"
Apr 17 23:28:50.091950 kubelet[2685]: I0417 23:28:50.091798 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d-kubelet-dir\") pod \"csi-node-driver-csfn9\" (UID: \"c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d\") " pod="calico-system/csi-node-driver-csfn9"
Apr 17 23:28:50.091950 kubelet[2685]: I0417 23:28:50.091863 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9kzx\" (UniqueName: \"kubernetes.io/projected/c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d-kube-api-access-q9kzx\") pod \"csi-node-driver-csfn9\" (UID: \"c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d\") " pod="calico-system/csi-node-driver-csfn9"
Apr 17 23:28:50.092058 kubelet[2685]: I0417 23:28:50.092028 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d-socket-dir\") pod \"csi-node-driver-csfn9\" (UID: \"c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d\") " pod="calico-system/csi-node-driver-csfn9"
Apr 17 23:28:50.102953 kubelet[2685]: E0417 23:28:50.102263 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:50.102953 kubelet[2685]: W0417 23:28:50.102421 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:50.102953 kubelet[2685]: E0417 23:28:50.102457 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:50.104503 kubelet[2685]: E0417 23:28:50.103242 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:50.104503 kubelet[2685]: W0417 23:28:50.103265 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:50.104503 kubelet[2685]: E0417 23:28:50.103392 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:50.104503 kubelet[2685]: E0417 23:28:50.104133 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:50.104503 kubelet[2685]: W0417 23:28:50.104312 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:50.104503 kubelet[2685]: E0417 23:28:50.104335 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:50.110235 kubelet[2685]: E0417 23:28:50.106892 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:50.110235 kubelet[2685]: W0417 23:28:50.106919 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:50.110235 kubelet[2685]: E0417 23:28:50.107595 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:50.110235 kubelet[2685]: E0417 23:28:50.110235 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:50.110438 kubelet[2685]: W0417 23:28:50.110273 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:50.110438 kubelet[2685]: E0417 23:28:50.110297 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:50.111328 kubelet[2685]: E0417 23:28:50.111118 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:50.111328 kubelet[2685]: W0417 23:28:50.111142 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:50.111328 kubelet[2685]: E0417 23:28:50.111190 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:50.117138 kubelet[2685]: E0417 23:28:50.117109 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:50.117736 kubelet[2685]: W0417 23:28:50.117308 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:50.118780 kubelet[2685]: E0417 23:28:50.118760 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:28:50.120617 kubelet[2685]: E0417 23:28:50.120577 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.120849 kubelet[2685]: W0417 23:28:50.120819 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.121006 kubelet[2685]: E0417 23:28:50.120977 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.121887 kubelet[2685]: E0417 23:28:50.121858 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.122138 kubelet[2685]: W0417 23:28:50.122037 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.122338 kubelet[2685]: E0417 23:28:50.122309 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.130431 kubelet[2685]: E0417 23:28:50.130339 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.130431 kubelet[2685]: W0417 23:28:50.130367 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.130431 kubelet[2685]: E0417 23:28:50.130391 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.134983 containerd[1590]: time="2026-04-17T23:28:50.134732292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:50.134983 containerd[1590]: time="2026-04-17T23:28:50.134812004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:50.134983 containerd[1590]: time="2026-04-17T23:28:50.134828610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:50.134983 containerd[1590]: time="2026-04-17T23:28:50.134934532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:50.184442 containerd[1590]: time="2026-04-17T23:28:50.184364290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67db9f69bd-tqm6k,Uid:ea36cf7f-b462-4992-b8b7-cacbabf826f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"501fe1bd99124a420ca280ba512434af9b0d6fa38525071b904a070a12888ba6\"" Apr 17 23:28:50.186468 containerd[1590]: time="2026-04-17T23:28:50.186124830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:28:50.193337 containerd[1590]: time="2026-04-17T23:28:50.193230613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fn97,Uid:f345927f-53e5-4506-a004-6ebce958c278,Namespace:calico-system,Attempt:0,}" Apr 17 23:28:50.193683 kubelet[2685]: E0417 23:28:50.192903 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.193744 kubelet[2685]: W0417 23:28:50.193691 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.193744 kubelet[2685]: E0417 23:28:50.193734 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.194264 kubelet[2685]: E0417 23:28:50.194060 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.194264 kubelet[2685]: W0417 23:28:50.194075 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.194264 kubelet[2685]: E0417 23:28:50.194087 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.194516 kubelet[2685]: E0417 23:28:50.194500 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.194612 kubelet[2685]: W0417 23:28:50.194597 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.194801 kubelet[2685]: E0417 23:28:50.194696 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.194939 kubelet[2685]: E0417 23:28:50.194927 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.195004 kubelet[2685]: W0417 23:28:50.194992 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.195214 kubelet[2685]: E0417 23:28:50.195116 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.195576 kubelet[2685]: E0417 23:28:50.195538 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.195786 kubelet[2685]: W0417 23:28:50.195657 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.195786 kubelet[2685]: E0417 23:28:50.195676 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.196284 kubelet[2685]: E0417 23:28:50.196144 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.196284 kubelet[2685]: W0417 23:28:50.196160 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.196284 kubelet[2685]: E0417 23:28:50.196173 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.196624 kubelet[2685]: E0417 23:28:50.196610 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.196836 kubelet[2685]: W0417 23:28:50.196681 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.196836 kubelet[2685]: E0417 23:28:50.196701 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.196996 kubelet[2685]: E0417 23:28:50.196984 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.197093 kubelet[2685]: W0417 23:28:50.197077 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.197225 kubelet[2685]: E0417 23:28:50.197131 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.197584 kubelet[2685]: E0417 23:28:50.197487 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.197584 kubelet[2685]: W0417 23:28:50.197499 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.197584 kubelet[2685]: E0417 23:28:50.197513 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.197851 kubelet[2685]: E0417 23:28:50.197839 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.197993 kubelet[2685]: W0417 23:28:50.197908 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.197993 kubelet[2685]: E0417 23:28:50.197926 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.198325 kubelet[2685]: E0417 23:28:50.198309 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.198530 kubelet[2685]: W0417 23:28:50.198397 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.198530 kubelet[2685]: E0417 23:28:50.198416 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.198734 kubelet[2685]: E0417 23:28:50.198723 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.198796 kubelet[2685]: W0417 23:28:50.198785 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.198850 kubelet[2685]: E0417 23:28:50.198839 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.199823 kubelet[2685]: E0417 23:28:50.199593 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.199823 kubelet[2685]: W0417 23:28:50.199608 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.199823 kubelet[2685]: E0417 23:28:50.199622 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.200588 kubelet[2685]: E0417 23:28:50.200473 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.200588 kubelet[2685]: W0417 23:28:50.200510 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.200588 kubelet[2685]: E0417 23:28:50.200526 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.201420 kubelet[2685]: E0417 23:28:50.201336 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.201420 kubelet[2685]: W0417 23:28:50.201354 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.201420 kubelet[2685]: E0417 23:28:50.201367 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.201810 kubelet[2685]: E0417 23:28:50.201741 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.201810 kubelet[2685]: W0417 23:28:50.201753 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.201810 kubelet[2685]: E0417 23:28:50.201763 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.202094 kubelet[2685]: E0417 23:28:50.202083 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.202152 kubelet[2685]: W0417 23:28:50.202140 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.202288 kubelet[2685]: E0417 23:28:50.202227 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.202687 kubelet[2685]: E0417 23:28:50.202599 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.202687 kubelet[2685]: W0417 23:28:50.202616 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.202687 kubelet[2685]: E0417 23:28:50.202627 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.203082 kubelet[2685]: E0417 23:28:50.203068 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.203313 kubelet[2685]: W0417 23:28:50.203152 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.203313 kubelet[2685]: E0417 23:28:50.203169 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.203612 kubelet[2685]: E0417 23:28:50.203600 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.203684 kubelet[2685]: W0417 23:28:50.203673 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.203744 kubelet[2685]: E0417 23:28:50.203733 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.204393 kubelet[2685]: E0417 23:28:50.204379 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.204596 kubelet[2685]: W0417 23:28:50.204465 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.204596 kubelet[2685]: E0417 23:28:50.204483 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.204780 kubelet[2685]: E0417 23:28:50.204770 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.204837 kubelet[2685]: W0417 23:28:50.204827 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.204973 kubelet[2685]: E0417 23:28:50.204887 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.205289 kubelet[2685]: E0417 23:28:50.205274 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.205478 kubelet[2685]: W0417 23:28:50.205372 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.205478 kubelet[2685]: E0417 23:28:50.205393 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.205739 kubelet[2685]: E0417 23:28:50.205728 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.205920 kubelet[2685]: W0417 23:28:50.205787 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.205920 kubelet[2685]: E0417 23:28:50.205802 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.206112 kubelet[2685]: E0417 23:28:50.206099 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.206186 kubelet[2685]: W0417 23:28:50.206173 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.206291 kubelet[2685]: E0417 23:28:50.206278 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:50.224343 kubelet[2685]: E0417 23:28:50.224248 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:50.224343 kubelet[2685]: W0417 23:28:50.224274 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:50.224343 kubelet[2685]: E0417 23:28:50.224297 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:50.233180 containerd[1590]: time="2026-04-17T23:28:50.232885728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:28:50.233180 containerd[1590]: time="2026-04-17T23:28:50.232968200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:28:50.233180 containerd[1590]: time="2026-04-17T23:28:50.232979845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:50.233180 containerd[1590]: time="2026-04-17T23:28:50.233104775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:28:50.275990 containerd[1590]: time="2026-04-17T23:28:50.275873686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fn97,Uid:f345927f-53e5-4506-a004-6ebce958c278,Namespace:calico-system,Attempt:0,} returns sandbox id \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\"" Apr 17 23:28:51.685339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505003406.mount: Deactivated successfully. 
Apr 17 23:28:51.735081 kubelet[2685]: E0417 23:28:51.734672 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d" Apr 17 23:28:52.237497 containerd[1590]: time="2026-04-17T23:28:52.237381360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:52.239815 containerd[1590]: time="2026-04-17T23:28:52.239249679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 17 23:28:52.239815 containerd[1590]: time="2026-04-17T23:28:52.239604688Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:52.242354 containerd[1590]: time="2026-04-17T23:28:52.242303109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:28:52.243583 containerd[1590]: time="2026-04-17T23:28:52.243551043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.057385717s" Apr 17 23:28:52.243706 containerd[1590]: time="2026-04-17T23:28:52.243691854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 17 23:28:52.244911 containerd[1590]: time="2026-04-17T23:28:52.244863520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:28:52.268499 containerd[1590]: time="2026-04-17T23:28:52.268376829Z" level=info msg="CreateContainer within sandbox \"501fe1bd99124a420ca280ba512434af9b0d6fa38525071b904a070a12888ba6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:28:52.286429 containerd[1590]: time="2026-04-17T23:28:52.286279217Z" level=info msg="CreateContainer within sandbox \"501fe1bd99124a420ca280ba512434af9b0d6fa38525071b904a070a12888ba6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0f07ee4251f583c18824697d9374af16030a2711dbda95a17cb45b97fc783454\"" Apr 17 23:28:52.287042 containerd[1590]: time="2026-04-17T23:28:52.286992597Z" level=info msg="StartContainer for \"0f07ee4251f583c18824697d9374af16030a2711dbda95a17cb45b97fc783454\"" Apr 17 23:28:52.361223 containerd[1590]: time="2026-04-17T23:28:52.361008146Z" level=info msg="StartContainer for \"0f07ee4251f583c18824697d9374af16030a2711dbda95a17cb45b97fc783454\" returns successfully" Apr 17 23:28:52.895445 kubelet[2685]: E0417 23:28:52.895403 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:52.896239 kubelet[2685]: W0417 23:28:52.895555 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:52.896239 kubelet[2685]: E0417 23:28:52.895588 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:28:52.896832 kubelet[2685]: E0417 23:28:52.896475 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:52.896832 kubelet[2685]: W0417 23:28:52.896500 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:52.896832 kubelet[2685]: E0417 23:28:52.896574 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:28:52.897262 kubelet[2685]: E0417 23:28:52.897099 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:28:52.897262 kubelet[2685]: W0417 23:28:52.897117 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:28:52.897262 kubelet[2685]: E0417 23:28:52.897134 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:28:52.897507 kubelet[2685]: E0417 23:28:52.897491 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.897718 kubelet[2685]: W0417 23:28:52.897574 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.897718 kubelet[2685]: E0417 23:28:52.897595 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.897987 kubelet[2685]: E0417 23:28:52.897971 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.898262 kubelet[2685]: W0417 23:28:52.898089 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.898262 kubelet[2685]: E0417 23:28:52.898110 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.898477 kubelet[2685]: E0417 23:28:52.898461 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.898576 kubelet[2685]: W0417 23:28:52.898559 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.898660 kubelet[2685]: E0417 23:28:52.898644 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.898940 kubelet[2685]: E0417 23:28:52.898925 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.899230 kubelet[2685]: W0417 23:28:52.899027 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.899230 kubelet[2685]: E0417 23:28:52.899078 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.899449 kubelet[2685]: E0417 23:28:52.899408 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.899536 kubelet[2685]: W0417 23:28:52.899520 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.899611 kubelet[2685]: E0417 23:28:52.899600 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.900006 kubelet[2685]: E0417 23:28:52.899918 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.900006 kubelet[2685]: W0417 23:28:52.899930 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.900006 kubelet[2685]: E0417 23:28:52.899940 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.900198 kubelet[2685]: E0417 23:28:52.900185 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.900259 kubelet[2685]: W0417 23:28:52.900248 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.900390 kubelet[2685]: E0417 23:28:52.900306 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.900495 kubelet[2685]: E0417 23:28:52.900484 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.900559 kubelet[2685]: W0417 23:28:52.900545 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.900618 kubelet[2685]: E0417 23:28:52.900608 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.900924 kubelet[2685]: E0417 23:28:52.900817 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.900924 kubelet[2685]: W0417 23:28:52.900828 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.900924 kubelet[2685]: E0417 23:28:52.900837 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.901153 kubelet[2685]: E0417 23:28:52.901142 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.901279 kubelet[2685]: W0417 23:28:52.901267 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.901429 kubelet[2685]: E0417 23:28:52.901332 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.901676 kubelet[2685]: E0417 23:28:52.901520 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.901676 kubelet[2685]: W0417 23:28:52.901530 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.901676 kubelet[2685]: E0417 23:28:52.901540 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.901832 kubelet[2685]: E0417 23:28:52.901821 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.901886 kubelet[2685]: W0417 23:28:52.901875 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.901943 kubelet[2685]: E0417 23:28:52.901933 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.916294 kubelet[2685]: E0417 23:28:52.916254 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.916294 kubelet[2685]: W0417 23:28:52.916278 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.916294 kubelet[2685]: E0417 23:28:52.916299 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.916692 kubelet[2685]: E0417 23:28:52.916562 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.916692 kubelet[2685]: W0417 23:28:52.916575 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.916692 kubelet[2685]: E0417 23:28:52.916587 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.917383 kubelet[2685]: E0417 23:28:52.916783 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.917383 kubelet[2685]: W0417 23:28:52.916796 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.917383 kubelet[2685]: E0417 23:28:52.916807 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.917383 kubelet[2685]: E0417 23:28:52.917059 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.917383 kubelet[2685]: W0417 23:28:52.917070 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.917383 kubelet[2685]: E0417 23:28:52.917079 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.917383 kubelet[2685]: E0417 23:28:52.917248 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.917383 kubelet[2685]: W0417 23:28:52.917257 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.917383 kubelet[2685]: E0417 23:28:52.917266 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.918595 kubelet[2685]: E0417 23:28:52.917824 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.918595 kubelet[2685]: W0417 23:28:52.917844 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.918595 kubelet[2685]: E0417 23:28:52.917861 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.918595 kubelet[2685]: E0417 23:28:52.918062 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.918595 kubelet[2685]: W0417 23:28:52.918072 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.918595 kubelet[2685]: E0417 23:28:52.918081 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.918595 kubelet[2685]: E0417 23:28:52.918509 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.918595 kubelet[2685]: W0417 23:28:52.918523 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:29:52.919058 kubelet[2685]: E0417 23:28:52.918540 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.919058 kubelet[2685]: E0417 23:28:52.918920 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.919058 kubelet[2685]: W0417 23:28:52.918934 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.919058 kubelet[2685]: E0417 23:28:52.918951 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.919367 kubelet[2685]: E0417 23:28:52.919159 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.919367 kubelet[2685]: W0417 23:28:52.919229 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.919367 kubelet[2685]: E0417 23:28:52.919246 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.919453 kubelet[2685]: E0417 23:28:52.919432 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.919453 kubelet[2685]: W0417 23:28:52.919442 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.919507 kubelet[2685]: E0417 23:28:52.919454 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.920083 kubelet[2685]: E0417 23:28:52.919651 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.920083 kubelet[2685]: W0417 23:28:52.919673 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.920083 kubelet[2685]: E0417 23:28:52.919685 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.920233 kubelet[2685]: E0417 23:28:52.920165 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.920233 kubelet[2685]: W0417 23:28:52.920198 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.920233 kubelet[2685]: E0417 23:28:52.920214 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.920509 kubelet[2685]: E0417 23:28:52.920492 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.920559 kubelet[2685]: W0417 23:28:52.920514 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.920559 kubelet[2685]: E0417 23:28:52.920530 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.920788 kubelet[2685]: E0417 23:28:52.920768 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.920831 kubelet[2685]: W0417 23:28:52.920788 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.920831 kubelet[2685]: E0417 23:28:52.920803 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.921227 kubelet[2685]: E0417 23:28:52.921184 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.921227 kubelet[2685]: W0417 23:28:52.921211 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.921227 kubelet[2685]: E0417 23:28:52.921229 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.921494 kubelet[2685]: E0417 23:28:52.921480 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.921539 kubelet[2685]: W0417 23:28:52.921496 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.921539 kubelet[2685]: E0417 23:28:52.921509 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:52.922134 kubelet[2685]: E0417 23:28:52.922117 2685 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:28:52.922214 kubelet[2685]: W0417 23:28:52.922134 2685 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:28:52.922214 kubelet[2685]: E0417 23:28:52.922148 2685 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:28:53.255960 systemd[1]: run-containerd-runc-k8s.io-0f07ee4251f583c18824697d9374af16030a2711dbda95a17cb45b97fc783454-runc.a6x3sP.mount: Deactivated successfully.
Apr 17 23:28:53.652099 containerd[1590]: time="2026-04-17T23:28:53.651278064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:53.654102 containerd[1590]: time="2026-04-17T23:28:53.654033184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Apr 17 23:28:53.655078 containerd[1590]: time="2026-04-17T23:28:53.655018087Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:53.659340 containerd[1590]: time="2026-04-17T23:28:53.659277170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:28:53.661793 containerd[1590]: time="2026-04-17T23:28:53.661749111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.416839855s"
Apr 17 23:28:53.662025 containerd[1590]: time="2026-04-17T23:28:53.661838782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Apr 17 23:28:53.667795 containerd[1590]: time="2026-04-17T23:28:53.667652527Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 17 23:28:53.684035 containerd[1590]: time="2026-04-17T23:28:53.683835563Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c65dc61e216a8f456c152a2e3e0e86d3692d5513f042ab304459a76a36bfdc28\""
Apr 17 23:28:53.684599 containerd[1590]: time="2026-04-17T23:28:53.684569619Z" level=info msg="StartContainer for \"c65dc61e216a8f456c152a2e3e0e86d3692d5513f042ab304459a76a36bfdc28\""
Apr 17 23:28:53.733157 kubelet[2685]: E0417 23:28:53.733036 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d"
Apr 17 23:28:53.755211 containerd[1590]: time="2026-04-17T23:28:53.755104464Z" level=info msg="StartContainer for \"c65dc61e216a8f456c152a2e3e0e86d3692d5513f042ab304459a76a36bfdc28\" returns successfully"
Apr 17 23:28:53.859774 kubelet[2685]: I0417 23:28:53.859448 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:28:53.872741 containerd[1590]: time="2026-04-17T23:28:53.872343294Z" level=info msg="shim disconnected" id=c65dc61e216a8f456c152a2e3e0e86d3692d5513f042ab304459a76a36bfdc28 namespace=k8s.io
Apr 17 23:28:53.872741 containerd[1590]: time="2026-04-17T23:28:53.872404596Z" level=warning msg="cleaning up after shim disconnected" id=c65dc61e216a8f456c152a2e3e0e86d3692d5513f042ab304459a76a36bfdc28 namespace=k8s.io
Apr 17 23:28:53.872741 containerd[1590]: time="2026-04-17T23:28:53.872415559Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:28:53.894221 kubelet[2685]: I0417 23:28:53.893627 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67db9f69bd-tqm6k" podStartSLOduration=2.8346951369999998 podStartE2EDuration="4.893596736s" podCreationTimestamp="2026-04-17 23:28:49 +0000 UTC" firstStartedPulling="2026-04-17 23:28:50.185856803 +0000 UTC m=+23.610612099" lastFinishedPulling="2026-04-17 23:28:52.244758402 +0000 UTC m=+25.669513698" observedRunningTime="2026-04-17 23:28:52.868570919 +0000 UTC m=+26.293326255" watchObservedRunningTime="2026-04-17 23:28:53.893596736 +0000 UTC m=+27.318352032"
Apr 17 23:28:54.254818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c65dc61e216a8f456c152a2e3e0e86d3692d5513f042ab304459a76a36bfdc28-rootfs.mount: Deactivated successfully.
Apr 17 23:28:54.873189 containerd[1590]: time="2026-04-17T23:28:54.872128989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 23:28:55.734183 kubelet[2685]: E0417 23:28:55.733303 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d"
Apr 17 23:28:57.733156 kubelet[2685]: E0417 23:28:57.733105 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d"
Apr 17 23:28:59.733519 kubelet[2685]: E0417 23:28:59.733461 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d"
Apr 17 23:29:00.085806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103652995.mount: Deactivated successfully.
Apr 17 23:29:00.113006 containerd[1590]: time="2026-04-17T23:29:00.112921311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:00.115566 containerd[1590]: time="2026-04-17T23:29:00.115489151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Apr 17 23:29:00.116974 containerd[1590]: time="2026-04-17T23:29:00.116925572Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:00.120324 containerd[1590]: time="2026-04-17T23:29:00.120249292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:00.121983 containerd[1590]: time="2026-04-17T23:29:00.121941260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 5.249769058s"
Apr 17 23:29:00.122087 containerd[1590]: time="2026-04-17T23:29:00.121990273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Apr 17 23:29:00.128214 containerd[1590]: time="2026-04-17T23:29:00.128020230Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 23:29:00.151783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275303304.mount: Deactivated successfully.
Apr 17 23:29:00.157746 containerd[1590]: time="2026-04-17T23:29:00.157457386Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"0526b86e37ee329d6d19e9be2154f1ce35f13d3ba9449aea5f878b901ea5b3b7\""
Apr 17 23:29:00.159013 containerd[1590]: time="2026-04-17T23:29:00.158957463Z" level=info msg="StartContainer for \"0526b86e37ee329d6d19e9be2154f1ce35f13d3ba9449aea5f878b901ea5b3b7\""
Apr 17 23:29:00.227161 containerd[1590]: time="2026-04-17T23:29:00.227112114Z" level=info msg="StartContainer for \"0526b86e37ee329d6d19e9be2154f1ce35f13d3ba9449aea5f878b901ea5b3b7\" returns successfully"
Apr 17 23:29:00.454567 containerd[1590]: time="2026-04-17T23:29:00.454290320Z" level=info msg="shim disconnected" id=0526b86e37ee329d6d19e9be2154f1ce35f13d3ba9449aea5f878b901ea5b3b7 namespace=k8s.io
Apr 17 23:29:00.454567 containerd[1590]: time="2026-04-17T23:29:00.454412392Z" level=warning msg="cleaning up after shim disconnected" id=0526b86e37ee329d6d19e9be2154f1ce35f13d3ba9449aea5f878b901ea5b3b7 namespace=k8s.io
Apr 17 23:29:00.454567 containerd[1590]: time="2026-04-17T23:29:00.454422515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:29:00.884397 containerd[1590]: time="2026-04-17T23:29:00.884250192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 23:29:01.086568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0526b86e37ee329d6d19e9be2154f1ce35f13d3ba9449aea5f878b901ea5b3b7-rootfs.mount: Deactivated successfully.
Apr 17 23:29:01.734680 kubelet[2685]: E0417 23:29:01.734188 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d"
Apr 17 23:29:03.416531 containerd[1590]: time="2026-04-17T23:29:03.416455922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:03.418565 containerd[1590]: time="2026-04-17T23:29:03.418499210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Apr 17 23:29:03.419771 containerd[1590]: time="2026-04-17T23:29:03.419719942Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:03.429928 containerd[1590]: time="2026-04-17T23:29:03.428331161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.544019714s"
Apr 17 23:29:03.429928 containerd[1590]: time="2026-04-17T23:29:03.428381654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Apr 17 23:29:03.429928 containerd[1590]: time="2026-04-17T23:29:03.428843124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:29:03.434972 containerd[1590]: time="2026-04-17T23:29:03.434929579Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 23:29:03.451152 containerd[1590]: time="2026-04-17T23:29:03.451006344Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"63c9bdb03974d7e86afeae5fcfdfbc81934af58ad559a39ba825b02336a9c3f6\""
Apr 17 23:29:03.454214 containerd[1590]: time="2026-04-17T23:29:03.453489778Z" level=info msg="StartContainer for \"63c9bdb03974d7e86afeae5fcfdfbc81934af58ad559a39ba825b02336a9c3f6\""
Apr 17 23:29:03.528006 containerd[1590]: time="2026-04-17T23:29:03.527958705Z" level=info msg="StartContainer for \"63c9bdb03974d7e86afeae5fcfdfbc81934af58ad559a39ba825b02336a9c3f6\" returns successfully"
Apr 17 23:29:03.734077 kubelet[2685]: E0417 23:29:03.733590 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-csfn9" podUID="c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d"
Apr 17 23:29:04.125909 kubelet[2685]: I0417 23:29:04.124913 2685 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 17 23:29:04.148472 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63c9bdb03974d7e86afeae5fcfdfbc81934af58ad559a39ba825b02336a9c3f6-rootfs.mount: Deactivated successfully.
Apr 17 23:29:04.218166 containerd[1590]: time="2026-04-17T23:29:04.216454601Z" level=info msg="shim disconnected" id=63c9bdb03974d7e86afeae5fcfdfbc81934af58ad559a39ba825b02336a9c3f6 namespace=k8s.io
Apr 17 23:29:04.218166 containerd[1590]: time="2026-04-17T23:29:04.216601875Z" level=warning msg="cleaning up after shim disconnected" id=63c9bdb03974d7e86afeae5fcfdfbc81934af58ad559a39ba825b02336a9c3f6 namespace=k8s.io
Apr 17 23:29:04.218166 containerd[1590]: time="2026-04-17T23:29:04.216611318Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:29:04.309430 kubelet[2685]: I0417 23:29:04.308824 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fbsg\" (UniqueName: \"kubernetes.io/projected/cbe78961-645a-4299-9350-a4feb03fb521-kube-api-access-6fbsg\") pod \"goldmane-5b85766d88-m9z9b\" (UID: \"cbe78961-645a-4299-9350-a4feb03fb521\") " pod="calico-system/goldmane-5b85766d88-m9z9b"
Apr 17 23:29:04.311451 kubelet[2685]: I0417 23:29:04.311395 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4psrr\" (UniqueName: \"kubernetes.io/projected/c8d00a3a-510e-455b-af6d-68cf933b1622-kube-api-access-4psrr\") pod \"coredns-674b8bbfcf-frnhq\" (UID: \"c8d00a3a-510e-455b-af6d-68cf933b1622\") " pod="kube-system/coredns-674b8bbfcf-frnhq"
Apr 17 23:29:04.311598 kubelet[2685]: I0417 23:29:04.311466 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8d00a3a-510e-455b-af6d-68cf933b1622-config-volume\") pod \"coredns-674b8bbfcf-frnhq\" (UID: \"c8d00a3a-510e-455b-af6d-68cf933b1622\") " pod="kube-system/coredns-674b8bbfcf-frnhq"
Apr 17 23:29:04.311598 kubelet[2685]: I0417 23:29:04.311503 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w2lc\" (UniqueName: \"kubernetes.io/projected/27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7-kube-api-access-9w2lc\") pod \"calico-kube-controllers-c84bc646f-h6gw8\" (UID: \"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7\") " pod="calico-system/calico-kube-controllers-c84bc646f-h6gw8"
Apr 17 23:29:04.311598 kubelet[2685]: I0417 23:29:04.311532 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c11c886e-9476-40bc-ba5f-51336e501e07-config-volume\") pod \"coredns-674b8bbfcf-kdlnn\" (UID: \"c11c886e-9476-40bc-ba5f-51336e501e07\") " pod="kube-system/coredns-674b8bbfcf-kdlnn"
Apr 17 23:29:04.311598 kubelet[2685]: I0417 23:29:04.311564 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qc4x\" (UniqueName: \"kubernetes.io/projected/97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3-kube-api-access-2qc4x\") pod \"calico-apiserver-7565646fd6-lg558\" (UID: \"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3\") " pod="calico-system/calico-apiserver-7565646fd6-lg558"
Apr 17 23:29:04.311598 kubelet[2685]: I0417 23:29:04.311592 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-nginx-config\") pod \"whisker-54865d9c69-vrxcx\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " pod="calico-system/whisker-54865d9c69-vrxcx"
Apr 17 23:29:04.311727 kubelet[2685]: I0417 23:29:04.311633 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhh8q\" (UniqueName: \"kubernetes.io/projected/c11c886e-9476-40bc-ba5f-51336e501e07-kube-api-access-zhh8q\") pod \"coredns-674b8bbfcf-kdlnn\" (UID: \"c11c886e-9476-40bc-ba5f-51336e501e07\") " pod="kube-system/coredns-674b8bbfcf-kdlnn"
Apr 17 23:29:04.311727 kubelet[2685]: I0417 23:29:04.311674 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-ca-bundle\") pod \"whisker-54865d9c69-vrxcx\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " pod="calico-system/whisker-54865d9c69-vrxcx"
Apr 17 23:29:04.311727 kubelet[2685]: I0417 23:29:04.311704 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e2b312e-b94b-42fd-8904-f8f9ac9041dd-calico-apiserver-certs\") pod \"calico-apiserver-7565646fd6-h7t75\" (UID: \"8e2b312e-b94b-42fd-8904-f8f9ac9041dd\") " pod="calico-system/calico-apiserver-7565646fd6-h7t75"
Apr 17 23:29:04.311796 kubelet[2685]: I0417 23:29:04.311739 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7-tigera-ca-bundle\") pod \"calico-kube-controllers-c84bc646f-h6gw8\" (UID: \"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7\") " pod="calico-system/calico-kube-controllers-c84bc646f-h6gw8"
Apr 17 23:29:04.311796 kubelet[2685]: I0417 23:29:04.311770 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-backend-key-pair\") pod \"whisker-54865d9c69-vrxcx\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " pod="calico-system/whisker-54865d9c69-vrxcx"
Apr 17 23:29:04.311846 kubelet[2685]: I0417 23:29:04.311800 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trkwl\" (UniqueName: \"kubernetes.io/projected/8e2b312e-b94b-42fd-8904-f8f9ac9041dd-kube-api-access-trkwl\") pod \"calico-apiserver-7565646fd6-h7t75\" (UID: \"8e2b312e-b94b-42fd-8904-f8f9ac9041dd\") "
pod="calico-system/calico-apiserver-7565646fd6-h7t75" Apr 17 23:29:04.311875 kubelet[2685]: I0417 23:29:04.311842 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cbe78961-645a-4299-9350-a4feb03fb521-goldmane-key-pair\") pod \"goldmane-5b85766d88-m9z9b\" (UID: \"cbe78961-645a-4299-9350-a4feb03fb521\") " pod="calico-system/goldmane-5b85766d88-m9z9b" Apr 17 23:29:04.311931 kubelet[2685]: I0417 23:29:04.311897 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbe78961-645a-4299-9350-a4feb03fb521-config\") pod \"goldmane-5b85766d88-m9z9b\" (UID: \"cbe78961-645a-4299-9350-a4feb03fb521\") " pod="calico-system/goldmane-5b85766d88-m9z9b" Apr 17 23:29:04.311971 kubelet[2685]: I0417 23:29:04.311948 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbe78961-645a-4299-9350-a4feb03fb521-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-m9z9b\" (UID: \"cbe78961-645a-4299-9350-a4feb03fb521\") " pod="calico-system/goldmane-5b85766d88-m9z9b" Apr 17 23:29:04.312009 kubelet[2685]: I0417 23:29:04.311980 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3-calico-apiserver-certs\") pod \"calico-apiserver-7565646fd6-lg558\" (UID: \"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3\") " pod="calico-system/calico-apiserver-7565646fd6-lg558" Apr 17 23:29:04.312038 kubelet[2685]: I0417 23:29:04.312009 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6h7\" (UniqueName: \"kubernetes.io/projected/c266dc28-a296-4595-9d40-3d6c60f90528-kube-api-access-nn6h7\") pod 
\"whisker-54865d9c69-vrxcx\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " pod="calico-system/whisker-54865d9c69-vrxcx" Apr 17 23:29:04.507538 containerd[1590]: time="2026-04-17T23:29:04.507432192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kdlnn,Uid:c11c886e-9476-40bc-ba5f-51336e501e07,Namespace:kube-system,Attempt:0,}" Apr 17 23:29:04.533897 containerd[1590]: time="2026-04-17T23:29:04.532999874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frnhq,Uid:c8d00a3a-510e-455b-af6d-68cf933b1622,Namespace:kube-system,Attempt:0,}" Apr 17 23:29:04.556958 containerd[1590]: time="2026-04-17T23:29:04.556916413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54865d9c69-vrxcx,Uid:c266dc28-a296-4595-9d40-3d6c60f90528,Namespace:calico-system,Attempt:0,}" Apr 17 23:29:04.557933 containerd[1590]: time="2026-04-17T23:29:04.557222604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-lg558,Uid:97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3,Namespace:calico-system,Attempt:0,}" Apr 17 23:29:04.567537 containerd[1590]: time="2026-04-17T23:29:04.567472938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c84bc646f-h6gw8,Uid:27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7,Namespace:calico-system,Attempt:0,}" Apr 17 23:29:04.582078 containerd[1590]: time="2026-04-17T23:29:04.581994661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-h7t75,Uid:8e2b312e-b94b-42fd-8904-f8f9ac9041dd,Namespace:calico-system,Attempt:0,}" Apr 17 23:29:04.584781 containerd[1590]: time="2026-04-17T23:29:04.584713331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-m9z9b,Uid:cbe78961-645a-4299-9350-a4feb03fb521,Namespace:calico-system,Attempt:0,}" Apr 17 23:29:04.716305 containerd[1590]: time="2026-04-17T23:29:04.716256116Z" level=error msg="Failed to destroy network for sandbox 
\"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.717325 containerd[1590]: time="2026-04-17T23:29:04.717272191Z" level=error msg="encountered an error cleaning up failed sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.717475 containerd[1590]: time="2026-04-17T23:29:04.717454034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54865d9c69-vrxcx,Uid:c266dc28-a296-4595-9d40-3d6c60f90528,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.717834 kubelet[2685]: E0417 23:29:04.717792 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.719115 kubelet[2685]: E0417 23:29:04.718037 2685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54865d9c69-vrxcx" Apr 17 23:29:04.719115 kubelet[2685]: E0417 23:29:04.718088 2685 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54865d9c69-vrxcx" Apr 17 23:29:04.719115 kubelet[2685]: E0417 23:29:04.718144 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54865d9c69-vrxcx_calico-system(c266dc28-a296-4595-9d40-3d6c60f90528)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54865d9c69-vrxcx_calico-system(c266dc28-a296-4595-9d40-3d6c60f90528)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54865d9c69-vrxcx" podUID="c266dc28-a296-4595-9d40-3d6c60f90528" Apr 17 23:29:04.748210 containerd[1590]: time="2026-04-17T23:29:04.748127137Z" level=error msg="Failed to destroy network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.754994 containerd[1590]: time="2026-04-17T23:29:04.754531501Z" level=error msg="encountered an error cleaning up 
failed sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.754994 containerd[1590]: time="2026-04-17T23:29:04.754914189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kdlnn,Uid:c11c886e-9476-40bc-ba5f-51336e501e07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.755758 kubelet[2685]: E0417 23:29:04.755714 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.759691 kubelet[2685]: E0417 23:29:04.755778 2685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kdlnn" Apr 17 23:29:04.759691 kubelet[2685]: E0417 23:29:04.755800 2685 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kdlnn" Apr 17 23:29:04.759691 kubelet[2685]: E0417 23:29:04.755851 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kdlnn_kube-system(c11c886e-9476-40bc-ba5f-51336e501e07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kdlnn_kube-system(c11c886e-9476-40bc-ba5f-51336e501e07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kdlnn" podUID="c11c886e-9476-40bc-ba5f-51336e501e07" Apr 17 23:29:04.785565 containerd[1590]: time="2026-04-17T23:29:04.785513636Z" level=error msg="Failed to destroy network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.787438 containerd[1590]: time="2026-04-17T23:29:04.786893556Z" level=error msg="encountered an error cleaning up failed sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.787756 containerd[1590]: time="2026-04-17T23:29:04.787716146Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frnhq,Uid:c8d00a3a-510e-455b-af6d-68cf933b1622,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.788632 kubelet[2685]: E0417 23:29:04.788583 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.788739 kubelet[2685]: E0417 23:29:04.788652 2685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-frnhq" Apr 17 23:29:04.788739 kubelet[2685]: E0417 23:29:04.788675 2685 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-frnhq" Apr 17 23:29:04.788803 kubelet[2685]: E0417 23:29:04.788728 2685 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-frnhq_kube-system(c8d00a3a-510e-455b-af6d-68cf933b1622)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-frnhq_kube-system(c8d00a3a-510e-455b-af6d-68cf933b1622)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-frnhq" podUID="c8d00a3a-510e-455b-af6d-68cf933b1622" Apr 17 23:29:04.825653 containerd[1590]: time="2026-04-17T23:29:04.825599680Z" level=error msg="Failed to destroy network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.826255 containerd[1590]: time="2026-04-17T23:29:04.826213062Z" level=error msg="encountered an error cleaning up failed sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.826431 containerd[1590]: time="2026-04-17T23:29:04.826406907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-m9z9b,Uid:cbe78961-645a-4299-9350-a4feb03fb521,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.827084 kubelet[2685]: E0417 23:29:04.826784 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.827084 kubelet[2685]: E0417 23:29:04.826855 2685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-m9z9b" Apr 17 23:29:04.827084 kubelet[2685]: E0417 23:29:04.826877 2685 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-m9z9b" Apr 17 23:29:04.827226 kubelet[2685]: E0417 23:29:04.826933 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-m9z9b_calico-system(cbe78961-645a-4299-9350-a4feb03fb521)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-m9z9b_calico-system(cbe78961-645a-4299-9350-a4feb03fb521)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-m9z9b" podUID="cbe78961-645a-4299-9350-a4feb03fb521" Apr 17 23:29:04.846186 containerd[1590]: time="2026-04-17T23:29:04.846077703Z" level=error msg="Failed to destroy network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.846649 containerd[1590]: time="2026-04-17T23:29:04.846548612Z" level=error msg="encountered an error cleaning up failed sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.846649 containerd[1590]: time="2026-04-17T23:29:04.846608866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c84bc646f-h6gw8,Uid:27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.847210 kubelet[2685]: E0417 23:29:04.846901 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.847210 kubelet[2685]: E0417 23:29:04.847068 2685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c84bc646f-h6gw8" Apr 17 23:29:04.847210 kubelet[2685]: E0417 23:29:04.847095 2685 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c84bc646f-h6gw8" Apr 17 23:29:04.847385 kubelet[2685]: E0417 23:29:04.847167 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c84bc646f-h6gw8_calico-system(27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c84bc646f-h6gw8_calico-system(27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c84bc646f-h6gw8" podUID="27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7" Apr 17 23:29:04.853624 containerd[1590]: time="2026-04-17T23:29:04.853305257Z" level=error msg="Failed to destroy network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.853756 containerd[1590]: time="2026-04-17T23:29:04.853713151Z" level=error msg="encountered an error cleaning up failed sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.855087 containerd[1590]: time="2026-04-17T23:29:04.853789969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-lg558,Uid:97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.855314 kubelet[2685]: E0417 23:29:04.854136 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:29:04.855314 kubelet[2685]: E0417 
23:29:04.854216 2685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7565646fd6-lg558" Apr 17 23:29:04.855314 kubelet[2685]: E0417 23:29:04.854263 2685 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7565646fd6-lg558" Apr 17 23:29:04.855439 kubelet[2685]: E0417 23:29:04.854452 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7565646fd6-lg558_calico-system(97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7565646fd6-lg558_calico-system(97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7565646fd6-lg558" podUID="97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3" Apr 17 23:29:04.857579 containerd[1590]: time="2026-04-17T23:29:04.857533556Z" level=error msg="Failed to destroy network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:04.857934 containerd[1590]: time="2026-04-17T23:29:04.857893640Z" level=error msg="encountered an error cleaning up failed sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:04.858000 containerd[1590]: time="2026-04-17T23:29:04.857950293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-h7t75,Uid:8e2b312e-b94b-42fd-8904-f8f9ac9041dd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:04.858702 kubelet[2685]: E0417 23:29:04.858387 2685 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:04.858702 kubelet[2685]: E0417 23:29:04.858509 2685 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7565646fd6-h7t75"
Apr 17 23:29:04.858702 kubelet[2685]: E0417 23:29:04.858532 2685 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7565646fd6-h7t75"
Apr 17 23:29:04.858855 kubelet[2685]: E0417 23:29:04.858658 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7565646fd6-h7t75_calico-system(8e2b312e-b94b-42fd-8904-f8f9ac9041dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7565646fd6-h7t75_calico-system(8e2b312e-b94b-42fd-8904-f8f9ac9041dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7565646fd6-h7t75" podUID="8e2b312e-b94b-42fd-8904-f8f9ac9041dd"
Apr 17 23:29:04.899297 kubelet[2685]: I0417 23:29:04.898837 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57"
Apr 17 23:29:04.900201 containerd[1590]: time="2026-04-17T23:29:04.899902169Z" level=info msg="StopPodSandbox for \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\""
Apr 17 23:29:04.900201 containerd[1590]: time="2026-04-17T23:29:04.900168150Z" level=info msg="Ensure that sandbox 97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57 in task-service has been cleanup successfully"
Apr 17 23:29:04.903007 kubelet[2685]: I0417 23:29:04.902962 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac"
Apr 17 23:29:04.904735 containerd[1590]: time="2026-04-17T23:29:04.904266540Z" level=info msg="StopPodSandbox for \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\""
Apr 17 23:29:04.907826 containerd[1590]: time="2026-04-17T23:29:04.907387502Z" level=info msg="Ensure that sandbox e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac in task-service has been cleanup successfully"
Apr 17 23:29:04.909553 kubelet[2685]: I0417 23:29:04.908688 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e"
Apr 17 23:29:04.911660 containerd[1590]: time="2026-04-17T23:29:04.911611601Z" level=info msg="StopPodSandbox for \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\""
Apr 17 23:29:04.912790 containerd[1590]: time="2026-04-17T23:29:04.911836733Z" level=info msg="Ensure that sandbox a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e in task-service has been cleanup successfully"
Apr 17 23:29:04.933315 kubelet[2685]: I0417 23:29:04.931549 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be"
Apr 17 23:29:04.942075 containerd[1590]: time="2026-04-17T23:29:04.940083835Z" level=info msg="StopPodSandbox for \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\""
Apr 17 23:29:04.942075 containerd[1590]: time="2026-04-17T23:29:04.940273599Z" level=info msg="Ensure that sandbox 83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be in task-service has been cleanup successfully"
Apr 17 23:29:04.952839 kubelet[2685]: I0417 23:29:04.952802 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f"
Apr 17 23:29:04.955195 containerd[1590]: time="2026-04-17T23:29:04.953687746Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 17 23:29:04.955540 containerd[1590]: time="2026-04-17T23:29:04.955512768Z" level=info msg="StopPodSandbox for \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\""
Apr 17 23:29:04.959948 containerd[1590]: time="2026-04-17T23:29:04.959902825Z" level=info msg="Ensure that sandbox bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f in task-service has been cleanup successfully"
Apr 17 23:29:04.971199 kubelet[2685]: I0417 23:29:04.970387 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801"
Apr 17 23:29:04.973736 containerd[1590]: time="2026-04-17T23:29:04.973694979Z" level=info msg="StopPodSandbox for \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\""
Apr 17 23:29:04.974833 containerd[1590]: time="2026-04-17T23:29:04.974795354Z" level=info msg="Ensure that sandbox d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801 in task-service has been cleanup successfully"
Apr 17 23:29:04.978037 kubelet[2685]: I0417 23:29:04.978008 2685 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8"
Apr 17 23:29:04.995831 containerd[1590]: time="2026-04-17T23:29:04.995786176Z" level=info msg="StopPodSandbox for \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\""
Apr 17 23:29:04.996213 containerd[1590]: time="2026-04-17T23:29:04.996187749Z" level=info msg="Ensure that sandbox 5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8 in task-service has been cleanup successfully"
Apr 17 23:29:05.002628 containerd[1590]: time="2026-04-17T23:29:05.002575777Z" level=error msg="StopPodSandbox for \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\" failed" error="failed to destroy network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:05.003333 kubelet[2685]: E0417 23:29:05.003030 2685 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57"
Apr 17 23:29:05.003333 kubelet[2685]: E0417 23:29:05.003147 2685 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57"}
Apr 17 23:29:05.003333 kubelet[2685]: E0417 23:29:05.003205 2685 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cbe78961-645a-4299-9350-a4feb03fb521\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:29:05.003333 kubelet[2685]: E0417 23:29:05.003229 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cbe78961-645a-4299-9350-a4feb03fb521\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-m9z9b" podUID="cbe78961-645a-4299-9350-a4feb03fb521"
Apr 17 23:29:05.021773 containerd[1590]: time="2026-04-17T23:29:05.019761716Z" level=info msg="CreateContainer within sandbox \"512d83db1c1b6362029dec7e5252c2b201f5f0b14b6b7cf413059be07d873648\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"80cbace6ab9179a2ce75f96603396d1ebde582839710a0d7162191faf853964b\""
Apr 17 23:29:05.024837 containerd[1590]: time="2026-04-17T23:29:05.024793166Z" level=info msg="StartContainer for \"80cbace6ab9179a2ce75f96603396d1ebde582839710a0d7162191faf853964b\""
Apr 17 23:29:05.080481 containerd[1590]: time="2026-04-17T23:29:05.080201208Z" level=error msg="StopPodSandbox for \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\" failed" error="failed to destroy network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:05.080942 containerd[1590]: time="2026-04-17T23:29:05.080796582Z" level=error msg="StopPodSandbox for \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\" failed" error="failed to destroy network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:05.081335 kubelet[2685]: E0417 23:29:05.081285 2685 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e"
Apr 17 23:29:05.081477 kubelet[2685]: E0417 23:29:05.081346 2685 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e"}
Apr 17 23:29:05.081477 kubelet[2685]: E0417 23:29:05.081384 2685 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c11c886e-9476-40bc-ba5f-51336e501e07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:29:05.081477 kubelet[2685]: E0417 23:29:05.081461 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c11c886e-9476-40bc-ba5f-51336e501e07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kdlnn" podUID="c11c886e-9476-40bc-ba5f-51336e501e07"
Apr 17 23:29:05.082074 kubelet[2685]: E0417 23:29:05.081314 2685 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac"
Apr 17 23:29:05.082074 kubelet[2685]: E0417 23:29:05.081670 2685 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac"}
Apr 17 23:29:05.082074 kubelet[2685]: E0417 23:29:05.081699 2685 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:29:05.082074 kubelet[2685]: E0417 23:29:05.081720 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7565646fd6-lg558" podUID="97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3"
Apr 17 23:29:05.093761 containerd[1590]: time="2026-04-17T23:29:05.093697119Z" level=error msg="StopPodSandbox for \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\" failed" error="failed to destroy network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:05.094789 kubelet[2685]: E0417 23:29:05.094605 2685 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be"
Apr 17 23:29:05.094789 kubelet[2685]: E0417 23:29:05.094671 2685 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be"}
Apr 17 23:29:05.094789 kubelet[2685]: E0417 23:29:05.094722 2685 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e2b312e-b94b-42fd-8904-f8f9ac9041dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:29:05.094789 kubelet[2685]: E0417 23:29:05.094751 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e2b312e-b94b-42fd-8904-f8f9ac9041dd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7565646fd6-h7t75" podUID="8e2b312e-b94b-42fd-8904-f8f9ac9041dd"
Apr 17 23:29:05.111468 containerd[1590]: time="2026-04-17T23:29:05.110849490Z" level=error msg="StopPodSandbox for \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\" failed" error="failed to destroy network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:05.111621 kubelet[2685]: E0417 23:29:05.111130 2685 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801"
Apr 17 23:29:05.111621 kubelet[2685]: E0417 23:29:05.111194 2685 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801"}
Apr 17 23:29:05.111621 kubelet[2685]: E0417 23:29:05.111228 2685 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c266dc28-a296-4595-9d40-3d6c60f90528\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:29:05.111621 kubelet[2685]: E0417 23:29:05.111251 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c266dc28-a296-4595-9d40-3d6c60f90528\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54865d9c69-vrxcx" podUID="c266dc28-a296-4595-9d40-3d6c60f90528"
Apr 17 23:29:05.120527 containerd[1590]: time="2026-04-17T23:29:05.120211632Z" level=error msg="StopPodSandbox for \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\" failed" error="failed to destroy network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:05.121004 kubelet[2685]: E0417 23:29:05.120737 2685 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8"
Apr 17 23:29:05.121004 kubelet[2685]: E0417 23:29:05.120801 2685 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8"}
Apr 17 23:29:05.121004 kubelet[2685]: E0417 23:29:05.120835 2685 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8d00a3a-510e-455b-af6d-68cf933b1622\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:29:05.121004 kubelet[2685]: E0417 23:29:05.120861 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8d00a3a-510e-455b-af6d-68cf933b1622\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-frnhq" podUID="c8d00a3a-510e-455b-af6d-68cf933b1622"
Apr 17 23:29:05.123106 containerd[1590]: time="2026-04-17T23:29:05.122725397Z" level=error msg="StopPodSandbox for \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\" failed" error="failed to destroy network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 17 23:29:05.123234 kubelet[2685]: E0417 23:29:05.123147 2685 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f"
Apr 17 23:29:05.123234 kubelet[2685]: E0417 23:29:05.123201 2685 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f"}
Apr 17 23:29:05.123324 kubelet[2685]: E0417 23:29:05.123241 2685 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Apr 17 23:29:05.123324 kubelet[2685]: E0417 23:29:05.123264 2685 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c84bc646f-h6gw8" podUID="27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7"
Apr 17 23:29:05.168101 containerd[1590]: time="2026-04-17T23:29:05.167939109Z" level=info msg="StartContainer for \"80cbace6ab9179a2ce75f96603396d1ebde582839710a0d7162191faf853964b\" returns successfully"
Apr 17 23:29:05.739603 containerd[1590]: time="2026-04-17T23:29:05.739028226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-csfn9,Uid:c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d,Namespace:calico-system,Attempt:0,}"
Apr 17 23:29:05.957730 systemd-networkd[1234]: cali642fc4ebfbd: Link UP
Apr 17 23:29:05.960881 systemd-networkd[1234]: cali642fc4ebfbd: Gained carrier
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.795 [ERROR][3864] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.829 [INFO][3864] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0 csi-node-driver- calico-system c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d 744 0 2026-04-17 23:28:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 csi-node-driver-csfn9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali642fc4ebfbd [] [] }} ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.829 [INFO][3864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.884 [INFO][3875] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" HandleID="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.900 [INFO][3875] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" HandleID="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f9dd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"csi-node-driver-csfn9", "timestamp":"2026-04-17 23:29:05.884596353 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40005314a0)}
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.900 [INFO][3875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.900 [INFO][3875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.900 [INFO][3875] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0'
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.904 [INFO][3875] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.910 [INFO][3875] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.917 [INFO][3875] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.921 [INFO][3875] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.924 [INFO][3875] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.924 [INFO][3875] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.927 [INFO][3875] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.932 [INFO][3875] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.939 [INFO][3875] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.193/26] block=192.168.124.192/26 handle="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.939 [INFO][3875] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.193/26] handle="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" host="ci-4081-3-6-n-9c3210a1b0"
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.940 [INFO][3875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:29:05.978406 containerd[1590]: 2026-04-17 23:29:05.940 [INFO][3875] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.193/26] IPv6=[] ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" HandleID="k8s-pod-network.91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0"
Apr 17 23:29:05.980604 containerd[1590]: 2026-04-17 23:29:05.945 [INFO][3864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"csi-node-driver-csfn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali642fc4ebfbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:29:05.980604 containerd[1590]: 2026-04-17 23:29:05.945 [INFO][3864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.193/32] ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0"
Apr 17 23:29:05.980604 containerd[1590]: 2026-04-17 23:29:05.945 [INFO][3864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali642fc4ebfbd ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0"
Apr 17 23:29:05.980604 containerd[1590]: 2026-04-17 23:29:05.958 [INFO][3864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0"
Apr 17 23:29:05.980604 containerd[1590]: 2026-04-17 23:29:05.958 [INFO][3864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d", ResourceVersion:"744", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844", Pod:"csi-node-driver-csfn9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali642fc4ebfbd", MAC:"fe:f4:91:f4:8e:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:29:05.980604 containerd[1590]: 2026-04-17 23:29:05.972 [INFO][3864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844" Namespace="calico-system" Pod="csi-node-driver-csfn9" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-csi--node--driver--csfn9-eth0"
Apr 17 23:29:05.999762 containerd[1590]: time="2026-04-17T23:29:05.996677121Z" level=info msg="StopPodSandbox for \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\""
Apr 17 23:29:06.008627 containerd[1590]: time="2026-04-17T23:29:06.008452836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:29:06.008827 containerd[1590]: time="2026-04-17T23:29:06.008791750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:29:06.008954 containerd[1590]: time="2026-04-17T23:29:06.008929260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:29:06.009297 containerd[1590]: time="2026-04-17T23:29:06.009258931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:29:06.062283 containerd[1590]: time="2026-04-17T23:29:06.062091245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-csfn9,Uid:c2a1ae04-8fae-43c4-9fd4-f9fc09ef8b7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844\""
Apr 17 23:29:06.065797 containerd[1590]: time="2026-04-17T23:29:06.065702793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 17 23:29:06.104171 kubelet[2685]: I0417 23:29:06.103772 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5fn97" podStartSLOduration=3.9518091269999998 podStartE2EDuration="17.103749684s" podCreationTimestamp="2026-04-17 23:28:49 +0000 UTC" firstStartedPulling="2026-04-17 23:28:50.278370918 +0000 UTC m=+23.703126254" lastFinishedPulling="2026-04-17 23:29:03.430311515 +0000 UTC m=+36.855066811" observedRunningTime="2026-04-17 23:29:06.037109321 +0000 UTC m=+39.461864617" watchObservedRunningTime="2026-04-17 23:29:06.103749684 +0000 UTC m=+39.528504980"
Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.106 [INFO][3926] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801"
Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.106 [INFO][3926] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" iface="eth0" netns="/var/run/netns/cni-6f66b067-6f9b-8e3f-62a1-db7cd6626c48"
Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.106 [INFO][3926] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth.
ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" iface="eth0" netns="/var/run/netns/cni-6f66b067-6f9b-8e3f-62a1-db7cd6626c48" Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.107 [INFO][3926] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" iface="eth0" netns="/var/run/netns/cni-6f66b067-6f9b-8e3f-62a1-db7cd6626c48" Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.107 [INFO][3926] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.107 [INFO][3926] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.129 [INFO][3948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.129 [INFO][3948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.129 [INFO][3948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.144 [WARNING][3948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.144 [INFO][3948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.146 [INFO][3948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:06.151169 containerd[1590]: 2026-04-17 23:29:06.148 [INFO][3926] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:06.154182 containerd[1590]: time="2026-04-17T23:29:06.151819080Z" level=info msg="TearDown network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\" successfully" Apr 17 23:29:06.154182 containerd[1590]: time="2026-04-17T23:29:06.151867131Z" level=info msg="StopPodSandbox for \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\" returns successfully" Apr 17 23:29:06.155198 systemd[1]: run-netns-cni\x2d6f66b067\x2d6f9b\x2d8e3f\x2d62a1\x2ddb7cd6626c48.mount: Deactivated successfully. 
Apr 17 23:29:06.236441 kubelet[2685]: I0417 23:29:06.235957 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn6h7\" (UniqueName: \"kubernetes.io/projected/c266dc28-a296-4595-9d40-3d6c60f90528-kube-api-access-nn6h7\") pod \"c266dc28-a296-4595-9d40-3d6c60f90528\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " Apr 17 23:29:06.236441 kubelet[2685]: I0417 23:29:06.236075 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-backend-key-pair\") pod \"c266dc28-a296-4595-9d40-3d6c60f90528\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " Apr 17 23:29:06.236441 kubelet[2685]: I0417 23:29:06.236139 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-nginx-config\") pod \"c266dc28-a296-4595-9d40-3d6c60f90528\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " Apr 17 23:29:06.236441 kubelet[2685]: I0417 23:29:06.236183 2685 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-ca-bundle\") pod \"c266dc28-a296-4595-9d40-3d6c60f90528\" (UID: \"c266dc28-a296-4595-9d40-3d6c60f90528\") " Apr 17 23:29:06.237984 kubelet[2685]: I0417 23:29:06.237125 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c266dc28-a296-4595-9d40-3d6c60f90528" (UID: "c266dc28-a296-4595-9d40-3d6c60f90528"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:29:06.239466 kubelet[2685]: I0417 23:29:06.239402 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "c266dc28-a296-4595-9d40-3d6c60f90528" (UID: "c266dc28-a296-4595-9d40-3d6c60f90528"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:29:06.240831 kubelet[2685]: I0417 23:29:06.240793 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c266dc28-a296-4595-9d40-3d6c60f90528-kube-api-access-nn6h7" (OuterVolumeSpecName: "kube-api-access-nn6h7") pod "c266dc28-a296-4595-9d40-3d6c60f90528" (UID: "c266dc28-a296-4595-9d40-3d6c60f90528"). InnerVolumeSpecName "kube-api-access-nn6h7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:29:06.241890 kubelet[2685]: I0417 23:29:06.241844 2685 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c266dc28-a296-4595-9d40-3d6c60f90528" (UID: "c266dc28-a296-4595-9d40-3d6c60f90528"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:29:06.337413 kubelet[2685]: I0417 23:29:06.336631 2685 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-nginx-config\") on node \"ci-4081-3-6-n-9c3210a1b0\" DevicePath \"\"" Apr 17 23:29:06.337413 kubelet[2685]: I0417 23:29:06.337276 2685 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-ca-bundle\") on node \"ci-4081-3-6-n-9c3210a1b0\" DevicePath \"\"" Apr 17 23:29:06.337413 kubelet[2685]: I0417 23:29:06.337318 2685 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nn6h7\" (UniqueName: \"kubernetes.io/projected/c266dc28-a296-4595-9d40-3d6c60f90528-kube-api-access-nn6h7\") on node \"ci-4081-3-6-n-9c3210a1b0\" DevicePath \"\"" Apr 17 23:29:06.337413 kubelet[2685]: I0417 23:29:06.337349 2685 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c266dc28-a296-4595-9d40-3d6c60f90528-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-9c3210a1b0\" DevicePath \"\"" Apr 17 23:29:06.447314 systemd[1]: var-lib-kubelet-pods-c266dc28\x2da296\x2d4595\x2d9d40\x2d3d6c60f90528-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnn6h7.mount: Deactivated successfully. Apr 17 23:29:06.447715 systemd[1]: var-lib-kubelet-pods-c266dc28\x2da296\x2d4595\x2d9d40\x2d3d6c60f90528-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:29:07.000834 kubelet[2685]: I0417 23:29:06.999515 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:29:07.146276 kubelet[2685]: I0417 23:29:07.145980 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/61fbcce8-34da-4e12-a3fe-0e0375991345-nginx-config\") pod \"whisker-86bfdb65cd-2qd9m\" (UID: \"61fbcce8-34da-4e12-a3fe-0e0375991345\") " pod="calico-system/whisker-86bfdb65cd-2qd9m" Apr 17 23:29:07.146276 kubelet[2685]: I0417 23:29:07.146044 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/61fbcce8-34da-4e12-a3fe-0e0375991345-whisker-backend-key-pair\") pod \"whisker-86bfdb65cd-2qd9m\" (UID: \"61fbcce8-34da-4e12-a3fe-0e0375991345\") " pod="calico-system/whisker-86bfdb65cd-2qd9m" Apr 17 23:29:07.146276 kubelet[2685]: I0417 23:29:07.146111 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crlw\" (UniqueName: \"kubernetes.io/projected/61fbcce8-34da-4e12-a3fe-0e0375991345-kube-api-access-5crlw\") pod \"whisker-86bfdb65cd-2qd9m\" (UID: \"61fbcce8-34da-4e12-a3fe-0e0375991345\") " pod="calico-system/whisker-86bfdb65cd-2qd9m" Apr 17 23:29:07.146276 kubelet[2685]: I0417 23:29:07.146140 2685 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61fbcce8-34da-4e12-a3fe-0e0375991345-whisker-ca-bundle\") pod \"whisker-86bfdb65cd-2qd9m\" (UID: \"61fbcce8-34da-4e12-a3fe-0e0375991345\") " pod="calico-system/whisker-86bfdb65cd-2qd9m" Apr 17 23:29:07.411581 containerd[1590]: time="2026-04-17T23:29:07.411363195Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-86bfdb65cd-2qd9m,Uid:61fbcce8-34da-4e12-a3fe-0e0375991345,Namespace:calico-system,Attempt:0,}" Apr 17 23:29:07.616131 systemd-networkd[1234]: calidead468a27c: Link UP Apr 17 23:29:07.619529 systemd-networkd[1234]: calidead468a27c: Gained carrier Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.468 [ERROR][4055] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.490 [INFO][4055] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0 whisker-86bfdb65cd- calico-system 61fbcce8-34da-4e12-a3fe-0e0375991345 935 0 2026-04-17 23:29:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86bfdb65cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 whisker-86bfdb65cd-2qd9m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidead468a27c [] [] }} ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.490 [INFO][4055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.530 [INFO][4067] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" HandleID="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.542 [INFO][4067] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" HandleID="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"whisker-86bfdb65cd-2qd9m", "timestamp":"2026-04-17 23:29:07.530960357 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000352f20)} Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.542 [INFO][4067] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.543 [INFO][4067] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.543 [INFO][4067] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0' Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.546 [INFO][4067] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.555 [INFO][4067] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.564 [INFO][4067] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.570 [INFO][4067] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.574 [INFO][4067] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.574 [INFO][4067] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.577 [INFO][4067] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.585 [INFO][4067] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.599 [INFO][4067] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.194/26] block=192.168.124.192/26 handle="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.599 [INFO][4067] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.194/26] handle="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.599 [INFO][4067] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:07.668641 containerd[1590]: 2026-04-17 23:29:07.599 [INFO][4067] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.194/26] IPv6=[] ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" HandleID="k8s-pod-network.6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" Apr 17 23:29:07.670241 containerd[1590]: 2026-04-17 23:29:07.607 [INFO][4055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0", GenerateName:"whisker-86bfdb65cd-", Namespace:"calico-system", SelfLink:"", UID:"61fbcce8-34da-4e12-a3fe-0e0375991345", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86bfdb65cd", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"whisker-86bfdb65cd-2qd9m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidead468a27c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:07.670241 containerd[1590]: 2026-04-17 23:29:07.607 [INFO][4055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.194/32] ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" Apr 17 23:29:07.670241 containerd[1590]: 2026-04-17 23:29:07.608 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidead468a27c ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" Apr 17 23:29:07.670241 containerd[1590]: 2026-04-17 23:29:07.628 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" Apr 17 23:29:07.670241 containerd[1590]: 2026-04-17 23:29:07.633 [INFO][4055] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0", GenerateName:"whisker-86bfdb65cd-", Namespace:"calico-system", SelfLink:"", UID:"61fbcce8-34da-4e12-a3fe-0e0375991345", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86bfdb65cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a", Pod:"whisker-86bfdb65cd-2qd9m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidead468a27c", MAC:"f2:8d:e5:47:93:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:07.670241 containerd[1590]: 2026-04-17 23:29:07.655 [INFO][4055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a" Namespace="calico-system" Pod="whisker-86bfdb65cd-2qd9m" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--86bfdb65cd--2qd9m-eth0" Apr 17 23:29:07.712907 containerd[1590]: time="2026-04-17T23:29:07.712579012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:07.712907 containerd[1590]: time="2026-04-17T23:29:07.712651188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:07.712907 containerd[1590]: time="2026-04-17T23:29:07.712667111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:07.712907 containerd[1590]: time="2026-04-17T23:29:07.712766652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:07.751089 containerd[1590]: time="2026-04-17T23:29:07.750670358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:07.752157 containerd[1590]: time="2026-04-17T23:29:07.752119545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 17 23:29:07.753711 containerd[1590]: time="2026-04-17T23:29:07.753669513Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:07.777102 containerd[1590]: time="2026-04-17T23:29:07.758163224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 
17 23:29:07.777897 containerd[1590]: time="2026-04-17T23:29:07.760390376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.694634011s" Apr 17 23:29:07.777897 containerd[1590]: time="2026-04-17T23:29:07.777708643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 17 23:29:07.788203 containerd[1590]: time="2026-04-17T23:29:07.788154894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86bfdb65cd-2qd9m,Uid:61fbcce8-34da-4e12-a3fe-0e0375991345,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a\"" Apr 17 23:29:07.789458 containerd[1590]: time="2026-04-17T23:29:07.789422203Z" level=info msg="CreateContainer within sandbox \"91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:29:07.793703 containerd[1590]: time="2026-04-17T23:29:07.793595726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:29:07.799315 systemd-networkd[1234]: cali642fc4ebfbd: Gained IPv6LL Apr 17 23:29:07.813524 containerd[1590]: time="2026-04-17T23:29:07.813482137Z" level=info msg="CreateContainer within sandbox \"91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d2bd3cc2112e1baca7356e9c3d84fed8b1235658e125345a233d2d0101fd8f66\"" Apr 17 23:29:07.816127 containerd[1590]: time="2026-04-17T23:29:07.815170495Z" level=info msg="StartContainer for 
\"d2bd3cc2112e1baca7356e9c3d84fed8b1235658e125345a233d2d0101fd8f66\"" Apr 17 23:29:07.902297 containerd[1590]: time="2026-04-17T23:29:07.902172356Z" level=info msg="StartContainer for \"d2bd3cc2112e1baca7356e9c3d84fed8b1235658e125345a233d2d0101fd8f66\" returns successfully" Apr 17 23:29:08.696376 systemd-networkd[1234]: calidead468a27c: Gained IPv6LL Apr 17 23:29:08.725244 systemd[1]: run-containerd-runc-k8s.io-d2bd3cc2112e1baca7356e9c3d84fed8b1235658e125345a233d2d0101fd8f66-runc.SzOfUf.mount: Deactivated successfully. Apr 17 23:29:08.740497 kubelet[2685]: I0417 23:29:08.740195 2685 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c266dc28-a296-4595-9d40-3d6c60f90528" path="/var/lib/kubelet/pods/c266dc28-a296-4595-9d40-3d6c60f90528/volumes" Apr 17 23:29:09.273179 containerd[1590]: time="2026-04-17T23:29:09.273117998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:09.278320 containerd[1590]: time="2026-04-17T23:29:09.278196096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 17 23:29:09.281072 containerd[1590]: time="2026-04-17T23:29:09.279781974Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:09.284507 containerd[1590]: time="2026-04-17T23:29:09.284411182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.490524074s" Apr 17 23:29:09.284507 containerd[1590]: time="2026-04-17T23:29:09.284505481Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 17 23:29:09.284717 containerd[1590]: time="2026-04-17T23:29:09.284678996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:09.287346 containerd[1590]: time="2026-04-17T23:29:09.287247151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:29:09.297714 containerd[1590]: time="2026-04-17T23:29:09.297669720Z" level=info msg="CreateContainer within sandbox \"6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:29:09.321667 containerd[1590]: time="2026-04-17T23:29:09.321598117Z" level=info msg="CreateContainer within sandbox \"6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"77b146ee07b1a1534bfc8ee140a93f3ef2c265dfc7a6f083ab20b7c77b00e61c\"" Apr 17 23:29:09.322599 containerd[1590]: time="2026-04-17T23:29:09.322544387Z" level=info msg="StartContainer for \"77b146ee07b1a1534bfc8ee140a93f3ef2c265dfc7a6f083ab20b7c77b00e61c\"" Apr 17 23:29:09.404983 containerd[1590]: time="2026-04-17T23:29:09.404317420Z" level=info msg="StartContainer for \"77b146ee07b1a1534bfc8ee140a93f3ef2c265dfc7a6f083ab20b7c77b00e61c\" returns successfully" Apr 17 23:29:09.986477 kubelet[2685]: I0417 23:29:09.986430 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:29:10.441091 kernel: calico-node[4253]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:29:10.938688 systemd-networkd[1234]: vxlan.calico: Link UP Apr 17 23:29:10.938702 systemd-networkd[1234]: vxlan.calico: Gained carrier Apr 17 23:29:11.081335 
containerd[1590]: time="2026-04-17T23:29:11.080443586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:11.083010 containerd[1590]: time="2026-04-17T23:29:11.082964586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 17 23:29:11.083921 containerd[1590]: time="2026-04-17T23:29:11.083872119Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:11.089104 containerd[1590]: time="2026-04-17T23:29:11.087712491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:11.089104 containerd[1590]: time="2026-04-17T23:29:11.088629386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.801332665s" Apr 17 23:29:11.089104 containerd[1590]: time="2026-04-17T23:29:11.088671674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 17 23:29:11.090838 containerd[1590]: time="2026-04-17T23:29:11.090790998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:29:11.096617 containerd[1590]: time="2026-04-17T23:29:11.096576740Z" level=info msg="CreateContainer 
within sandbox \"91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:29:11.112380 kubelet[2685]: I0417 23:29:11.112276 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:29:11.146232 containerd[1590]: time="2026-04-17T23:29:11.144162809Z" level=info msg="CreateContainer within sandbox \"91f67221ce108445e420063e5441bf1f7db0a37e2795fd7817b4412e3abce844\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f94b086c96e39cb32ff9d9975c94e720966fe97b3d348519ba16ed26382aa61c\"" Apr 17 23:29:11.150359 containerd[1590]: time="2026-04-17T23:29:11.147043958Z" level=info msg="StartContainer for \"f94b086c96e39cb32ff9d9975c94e720966fe97b3d348519ba16ed26382aa61c\"" Apr 17 23:29:11.239923 systemd[1]: run-containerd-runc-k8s.io-f94b086c96e39cb32ff9d9975c94e720966fe97b3d348519ba16ed26382aa61c-runc.P6dE0o.mount: Deactivated successfully. Apr 17 23:29:11.333404 containerd[1590]: time="2026-04-17T23:29:11.333247084Z" level=info msg="StartContainer for \"f94b086c96e39cb32ff9d9975c94e720966fe97b3d348519ba16ed26382aa61c\" returns successfully" Apr 17 23:29:11.837813 kubelet[2685]: I0417 23:29:11.837473 2685 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:29:11.837813 kubelet[2685]: I0417 23:29:11.837534 2685 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:29:12.535276 systemd-networkd[1234]: vxlan.calico: Gained IPv6LL Apr 17 23:29:13.055785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533778996.mount: Deactivated successfully. 
Apr 17 23:29:13.078093 containerd[1590]: time="2026-04-17T23:29:13.077989989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:13.080290 containerd[1590]: time="2026-04-17T23:29:13.080045002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 17 23:29:13.082259 containerd[1590]: time="2026-04-17T23:29:13.081807763Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:13.085831 containerd[1590]: time="2026-04-17T23:29:13.085748120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:13.086532 containerd[1590]: time="2026-04-17T23:29:13.086491695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.9953984s" Apr 17 23:29:13.086603 containerd[1590]: time="2026-04-17T23:29:13.086532902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 17 23:29:13.095306 containerd[1590]: time="2026-04-17T23:29:13.095256409Z" level=info msg="CreateContainer within sandbox \"6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:29:13.118243 
containerd[1590]: time="2026-04-17T23:29:13.118065597Z" level=info msg="CreateContainer within sandbox \"6ebe553c272b8f5d3fc0521b1772301bb6384a1bf861e9b96a1ea4fe8d47672a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"416750a411495fb580d045c3eb24df0fc814876068926d76f7bf4b4ad3b6fd4f\"" Apr 17 23:29:13.120955 containerd[1590]: time="2026-04-17T23:29:13.120577494Z" level=info msg="StartContainer for \"416750a411495fb580d045c3eb24df0fc814876068926d76f7bf4b4ad3b6fd4f\"" Apr 17 23:29:13.163744 systemd[1]: run-containerd-runc-k8s.io-416750a411495fb580d045c3eb24df0fc814876068926d76f7bf4b4ad3b6fd4f-runc.MMwbc1.mount: Deactivated successfully. Apr 17 23:29:13.201592 containerd[1590]: time="2026-04-17T23:29:13.201444042Z" level=info msg="StartContainer for \"416750a411495fb580d045c3eb24df0fc814876068926d76f7bf4b4ad3b6fd4f\" returns successfully" Apr 17 23:29:14.055148 kubelet[2685]: I0417 23:29:14.054716 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-csfn9" podStartSLOduration=20.029489764 podStartE2EDuration="25.054695051s" podCreationTimestamp="2026-04-17 23:28:49 +0000 UTC" firstStartedPulling="2026-04-17 23:29:06.065009601 +0000 UTC m=+39.489764897" lastFinishedPulling="2026-04-17 23:29:11.090214888 +0000 UTC m=+44.514970184" observedRunningTime="2026-04-17 23:29:12.048137833 +0000 UTC m=+45.472893129" watchObservedRunningTime="2026-04-17 23:29:14.054695051 +0000 UTC m=+47.479450387" Apr 17 23:29:14.058127 kubelet[2685]: I0417 23:29:14.057633 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-86bfdb65cd-2qd9m" podStartSLOduration=1.7614129379999999 podStartE2EDuration="7.05760981s" podCreationTimestamp="2026-04-17 23:29:07 +0000 UTC" firstStartedPulling="2026-04-17 23:29:07.791477278 +0000 UTC m=+41.216232574" lastFinishedPulling="2026-04-17 23:29:13.08767419 +0000 UTC m=+46.512429446" observedRunningTime="2026-04-17 
23:29:14.050465739 +0000 UTC m=+47.475221075" watchObservedRunningTime="2026-04-17 23:29:14.05760981 +0000 UTC m=+47.482365106" Apr 17 23:29:15.736165 containerd[1590]: time="2026-04-17T23:29:15.735792294Z" level=info msg="StopPodSandbox for \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\"" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.811 [INFO][4524] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.812 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" iface="eth0" netns="/var/run/netns/cni-9ca8c6d4-f0ec-cde5-5b66-0315211a28f7" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.813 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" iface="eth0" netns="/var/run/netns/cni-9ca8c6d4-f0ec-cde5-5b66-0315211a28f7" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.813 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" iface="eth0" netns="/var/run/netns/cni-9ca8c6d4-f0ec-cde5-5b66-0315211a28f7" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.813 [INFO][4524] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.813 [INFO][4524] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.838 [INFO][4531] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.838 [INFO][4531] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.838 [INFO][4531] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.848 [WARNING][4531] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.848 [INFO][4531] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.851 [INFO][4531] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:15.856905 containerd[1590]: 2026-04-17 23:29:15.853 [INFO][4524] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:15.859766 containerd[1590]: time="2026-04-17T23:29:15.859599024Z" level=info msg="TearDown network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\" successfully" Apr 17 23:29:15.859766 containerd[1590]: time="2026-04-17T23:29:15.859650353Z" level=info msg="StopPodSandbox for \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\" returns successfully" Apr 17 23:29:15.859676 systemd[1]: run-netns-cni\x2d9ca8c6d4\x2df0ec\x2dcde5\x2d5b66\x2d0315211a28f7.mount: Deactivated successfully. 
Apr 17 23:29:15.862355 containerd[1590]: time="2026-04-17T23:29:15.862303415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-lg558,Uid:97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3,Namespace:calico-system,Attempt:1,}" Apr 17 23:29:16.025721 systemd-networkd[1234]: cali87eb23c0096: Link UP Apr 17 23:29:16.028152 systemd-networkd[1234]: cali87eb23c0096: Gained carrier Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.926 [INFO][4539] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0 calico-apiserver-7565646fd6- calico-system 97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3 990 0 2026-04-17 23:28:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7565646fd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 calico-apiserver-7565646fd6-lg558 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali87eb23c0096 [] [] }} ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.926 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.956 [INFO][4550] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" HandleID="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.969 [INFO][4550] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" HandleID="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"calico-apiserver-7565646fd6-lg558", "timestamp":"2026-04-17 23:29:15.956694861 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000245080)} Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.969 [INFO][4550] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.969 [INFO][4550] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.970 [INFO][4550] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0' Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.973 [INFO][4550] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.982 [INFO][4550] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.988 [INFO][4550] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.991 [INFO][4550] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.995 [INFO][4550] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.995 [INFO][4550] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:15.997 [INFO][4550] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31 Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:16.004 [INFO][4550] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:16.014 [INFO][4550] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.195/26] block=192.168.124.192/26 handle="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:16.014 [INFO][4550] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.195/26] handle="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:16.014 [INFO][4550] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:16.063122 containerd[1590]: 2026-04-17 23:29:16.014 [INFO][4550] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.195/26] IPv6=[] ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" HandleID="k8s-pod-network.0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:16.064946 containerd[1590]: 2026-04-17 23:29:16.020 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"calico-apiserver-7565646fd6-lg558", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87eb23c0096", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.064946 containerd[1590]: 2026-04-17 23:29:16.020 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.195/32] ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:16.064946 containerd[1590]: 2026-04-17 23:29:16.020 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87eb23c0096 ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:16.064946 containerd[1590]: 2026-04-17 23:29:16.027 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" 
Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:16.064946 containerd[1590]: 2026-04-17 23:29:16.035 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31", Pod:"calico-apiserver-7565646fd6-lg558", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87eb23c0096", 
MAC:"6e:a9:7a:fe:27:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:16.064946 containerd[1590]: 2026-04-17 23:29:16.056 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-lg558" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:16.099109 containerd[1590]: time="2026-04-17T23:29:16.098832807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:16.099109 containerd[1590]: time="2026-04-17T23:29:16.098909100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:16.099109 containerd[1590]: time="2026-04-17T23:29:16.098925142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:16.099109 containerd[1590]: time="2026-04-17T23:29:16.099039242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:16.155399 containerd[1590]: time="2026-04-17T23:29:16.155190150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-lg558,Uid:97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31\"" Apr 17 23:29:16.169785 containerd[1590]: time="2026-04-17T23:29:16.169509035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:29:17.745814 containerd[1590]: time="2026-04-17T23:29:17.745716447Z" level=info msg="StopPodSandbox for \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\"" Apr 17 23:29:17.746443 containerd[1590]: time="2026-04-17T23:29:17.746419605Z" level=info msg="StopPodSandbox for \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\"" Apr 17 23:29:17.911915 systemd-networkd[1234]: cali87eb23c0096: Gained IPv6LL Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.852 [INFO][4649] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.853 [INFO][4649] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" iface="eth0" netns="/var/run/netns/cni-a66171bb-ac35-0565-71ab-0be7f45d770f" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.853 [INFO][4649] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" iface="eth0" netns="/var/run/netns/cni-a66171bb-ac35-0565-71ab-0be7f45d770f" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.853 [INFO][4649] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" iface="eth0" netns="/var/run/netns/cni-a66171bb-ac35-0565-71ab-0be7f45d770f" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.853 [INFO][4649] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.853 [INFO][4649] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.892 [INFO][4668] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.895 [INFO][4668] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.895 [INFO][4668] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.905 [WARNING][4668] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.905 [INFO][4668] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.907 [INFO][4668] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:17.921076 containerd[1590]: 2026-04-17 23:29:17.910 [INFO][4649] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:17.926585 systemd[1]: run-netns-cni\x2da66171bb\x2dac35\x2d0565\x2d71ab\x2d0be7f45d770f.mount: Deactivated successfully. 
Apr 17 23:29:17.930411 containerd[1590]: time="2026-04-17T23:29:17.928977625Z" level=info msg="TearDown network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\" successfully" Apr 17 23:29:17.930411 containerd[1590]: time="2026-04-17T23:29:17.929027073Z" level=info msg="StopPodSandbox for \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\" returns successfully" Apr 17 23:29:17.948256 containerd[1590]: time="2026-04-17T23:29:17.948180482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-h7t75,Uid:8e2b312e-b94b-42fd-8904-f8f9ac9041dd,Namespace:calico-system,Attempt:1,}" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.854 [INFO][4657] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.854 [INFO][4657] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" iface="eth0" netns="/var/run/netns/cni-110f75f2-467f-7b2c-60b3-b7b1c586abf1" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.856 [INFO][4657] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" iface="eth0" netns="/var/run/netns/cni-110f75f2-467f-7b2c-60b3-b7b1c586abf1" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.856 [INFO][4657] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" iface="eth0" netns="/var/run/netns/cni-110f75f2-467f-7b2c-60b3-b7b1c586abf1" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.856 [INFO][4657] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.857 [INFO][4657] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.967 [INFO][4673] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.967 [INFO][4673] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.967 [INFO][4673] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.988 [WARNING][4673] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.989 [INFO][4673] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.993 [INFO][4673] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:18.001734 containerd[1590]: 2026-04-17 23:29:17.999 [INFO][4657] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:18.006732 containerd[1590]: time="2026-04-17T23:29:18.006152257Z" level=info msg="TearDown network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\" successfully" Apr 17 23:29:18.006732 containerd[1590]: time="2026-04-17T23:29:18.006194504Z" level=info msg="StopPodSandbox for \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\" returns successfully" Apr 17 23:29:18.007793 systemd[1]: run-netns-cni\x2d110f75f2\x2d467f\x2d7b2c\x2d60b3\x2db7b1c586abf1.mount: Deactivated successfully. 
Apr 17 23:29:18.011587 containerd[1590]: time="2026-04-17T23:29:18.011083628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-m9z9b,Uid:cbe78961-645a-4299-9350-a4feb03fb521,Namespace:calico-system,Attempt:1,}" Apr 17 23:29:18.267578 systemd-networkd[1234]: calic181cfb3f7c: Link UP Apr 17 23:29:18.268558 systemd-networkd[1234]: calic181cfb3f7c: Gained carrier Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.128 [INFO][4694] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0 goldmane-5b85766d88- calico-system cbe78961-645a-4299-9350-a4feb03fb521 1005 0 2026-04-17 23:28:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 goldmane-5b85766d88-m9z9b eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic181cfb3f7c [] [] }} ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.128 [INFO][4694] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.178 [INFO][4716] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" 
HandleID="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.195 [INFO][4716] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" HandleID="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"goldmane-5b85766d88-m9z9b", "timestamp":"2026-04-17 23:29:18.178941594 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000411600)} Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.195 [INFO][4716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.195 [INFO][4716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.195 [INFO][4716] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0' Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.199 [INFO][4716] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.213 [INFO][4716] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.221 [INFO][4716] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.225 [INFO][4716] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.229 [INFO][4716] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.229 [INFO][4716] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.233 [INFO][4716] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.243 [INFO][4716] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.256 [INFO][4716] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.196/26] block=192.168.124.192/26 handle="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.258 [INFO][4716] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.196/26] handle="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.258 [INFO][4716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:18.298401 containerd[1590]: 2026-04-17 23:29:18.258 [INFO][4716] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.196/26] IPv6=[] ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" HandleID="k8s-pod-network.528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.300147 containerd[1590]: 2026-04-17 23:29:18.261 [INFO][4694] cni-plugin/k8s.go 418: Populated endpoint ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cbe78961-645a-4299-9350-a4feb03fb521", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"goldmane-5b85766d88-m9z9b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic181cfb3f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:18.300147 containerd[1590]: 2026-04-17 23:29:18.262 [INFO][4694] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.196/32] ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.300147 containerd[1590]: 2026-04-17 23:29:18.262 [INFO][4694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic181cfb3f7c ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.300147 containerd[1590]: 2026-04-17 23:29:18.268 [INFO][4694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.300147 containerd[1590]: 2026-04-17 23:29:18.273 [INFO][4694] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cbe78961-645a-4299-9350-a4feb03fb521", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a", Pod:"goldmane-5b85766d88-m9z9b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic181cfb3f7c", MAC:"2e:d5:d4:ba:68:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:18.300147 containerd[1590]: 2026-04-17 23:29:18.294 [INFO][4694] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a" Namespace="calico-system" Pod="goldmane-5b85766d88-m9z9b" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:18.360779 containerd[1590]: time="2026-04-17T23:29:18.357491438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:18.360779 containerd[1590]: time="2026-04-17T23:29:18.357610457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:18.360779 containerd[1590]: time="2026-04-17T23:29:18.357623940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:18.362472 containerd[1590]: time="2026-04-17T23:29:18.361337550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:18.401138 systemd-networkd[1234]: calia47afc8fdfb: Link UP Apr 17 23:29:18.403581 systemd-networkd[1234]: calia47afc8fdfb: Gained carrier Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.100 [INFO][4685] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0 calico-apiserver-7565646fd6- calico-system 8e2b312e-b94b-42fd-8904-f8f9ac9041dd 1004 0 2026-04-17 23:28:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7565646fd6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 calico-apiserver-7565646fd6-h7t75 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] 
calia47afc8fdfb [] [] }} ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.100 [INFO][4685] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.192 [INFO][4710] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" HandleID="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.214 [INFO][4710] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" HandleID="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000416040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"calico-apiserver-7565646fd6-h7t75", "timestamp":"2026-04-17 23:29:18.192288509 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000eadc0)} Apr 17 23:29:18.444352 
containerd[1590]: 2026-04-17 23:29:18.214 [INFO][4710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.258 [INFO][4710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.258 [INFO][4710] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0' Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.301 [INFO][4710] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.314 [INFO][4710] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.326 [INFO][4710] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.331 [INFO][4710] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.337 [INFO][4710] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.337 [INFO][4710] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.349 [INFO][4710] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214 Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.364 [INFO][4710] ipam/ipam.go 1272: Writing block in order to 
claim IPs block=192.168.124.192/26 handle="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.382 [INFO][4710] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.197/26] block=192.168.124.192/26 handle="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.383 [INFO][4710] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.197/26] handle="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.383 [INFO][4710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:18.444352 containerd[1590]: 2026-04-17 23:29:18.383 [INFO][4710] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.197/26] IPv6=[] ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" HandleID="k8s-pod-network.846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:18.444962 containerd[1590]: 2026-04-17 23:29:18.393 [INFO][4685] cni-plugin/k8s.go 418: Populated endpoint ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"8e2b312e-b94b-42fd-8904-f8f9ac9041dd", 
ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"calico-apiserver-7565646fd6-h7t75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia47afc8fdfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:18.444962 containerd[1590]: 2026-04-17 23:29:18.393 [INFO][4685] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.197/32] ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:18.444962 containerd[1590]: 2026-04-17 23:29:18.393 [INFO][4685] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia47afc8fdfb ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:18.444962 containerd[1590]: 
2026-04-17 23:29:18.405 [INFO][4685] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:18.444962 containerd[1590]: 2026-04-17 23:29:18.417 [INFO][4685] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"8e2b312e-b94b-42fd-8904-f8f9ac9041dd", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214", Pod:"calico-apiserver-7565646fd6-h7t75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia47afc8fdfb", MAC:"c2:08:00:f6:78:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:18.444962 containerd[1590]: 2026-04-17 23:29:18.436 [INFO][4685] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214" Namespace="calico-system" Pod="calico-apiserver-7565646fd6-h7t75" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:18.519542 containerd[1590]: time="2026-04-17T23:29:18.518655543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:18.519542 containerd[1590]: time="2026-04-17T23:29:18.519114418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:18.519542 containerd[1590]: time="2026-04-17T23:29:18.519140463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:18.519542 containerd[1590]: time="2026-04-17T23:29:18.519277765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:18.562406 containerd[1590]: time="2026-04-17T23:29:18.560935416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-m9z9b,Uid:cbe78961-645a-4299-9350-a4feb03fb521,Namespace:calico-system,Attempt:1,} returns sandbox id \"528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a\"" Apr 17 23:29:18.633340 containerd[1590]: time="2026-04-17T23:29:18.633293716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7565646fd6-h7t75,Uid:8e2b312e-b94b-42fd-8904-f8f9ac9041dd,Namespace:calico-system,Attempt:1,} returns sandbox id \"846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214\"" Apr 17 23:29:18.736615 containerd[1590]: time="2026-04-17T23:29:18.736305818Z" level=info msg="StopPodSandbox for \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\"" Apr 17 23:29:18.737202 containerd[1590]: time="2026-04-17T23:29:18.737175681Z" level=info msg="StopPodSandbox for \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\"" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.856 [INFO][4862] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.856 [INFO][4862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" iface="eth0" netns="/var/run/netns/cni-b98dffc6-8820-61e2-f46f-33b0689e51f8" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.856 [INFO][4862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" iface="eth0" netns="/var/run/netns/cni-b98dffc6-8820-61e2-f46f-33b0689e51f8" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.857 [INFO][4862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" iface="eth0" netns="/var/run/netns/cni-b98dffc6-8820-61e2-f46f-33b0689e51f8" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.857 [INFO][4862] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.857 [INFO][4862] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.930 [INFO][4876] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.930 [INFO][4876] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.930 [INFO][4876] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.949 [WARNING][4876] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.949 [INFO][4876] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.954 [INFO][4876] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:18.961373 containerd[1590]: 2026-04-17 23:29:18.958 [INFO][4862] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:18.965887 containerd[1590]: time="2026-04-17T23:29:18.965560641Z" level=info msg="TearDown network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\" successfully" Apr 17 23:29:18.965887 containerd[1590]: time="2026-04-17T23:29:18.965605328Z" level=info msg="StopPodSandbox for \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\" returns successfully" Apr 17 23:29:18.969236 systemd[1]: run-netns-cni\x2db98dffc6\x2d8820\x2d61e2\x2df46f\x2d33b0689e51f8.mount: Deactivated successfully. 
Apr 17 23:29:18.970977 containerd[1590]: time="2026-04-17T23:29:18.970442764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c84bc646f-h6gw8,Uid:27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7,Namespace:calico-system,Attempt:1,}" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.846 [INFO][4850] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.848 [INFO][4850] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" iface="eth0" netns="/var/run/netns/cni-076cc993-f20e-b199-aeb3-d9faf5cc3ce3" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.848 [INFO][4850] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" iface="eth0" netns="/var/run/netns/cni-076cc993-f20e-b199-aeb3-d9faf5cc3ce3" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.849 [INFO][4850] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" iface="eth0" netns="/var/run/netns/cni-076cc993-f20e-b199-aeb3-d9faf5cc3ce3" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.849 [INFO][4850] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.849 [INFO][4850] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.933 [INFO][4871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.933 [INFO][4871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.957 [INFO][4871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.976 [WARNING][4871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.976 [INFO][4871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.979 [INFO][4871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:18.988869 containerd[1590]: 2026-04-17 23:29:18.982 [INFO][4850] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:18.988869 containerd[1590]: time="2026-04-17T23:29:18.988702407Z" level=info msg="TearDown network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\" successfully" Apr 17 23:29:18.988869 containerd[1590]: time="2026-04-17T23:29:18.988735892Z" level=info msg="StopPodSandbox for \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\" returns successfully" Apr 17 23:29:18.993867 containerd[1590]: time="2026-04-17T23:29:18.992962707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kdlnn,Uid:c11c886e-9476-40bc-ba5f-51336e501e07,Namespace:kube-system,Attempt:1,}" Apr 17 23:29:18.995355 systemd[1]: run-netns-cni\x2d076cc993\x2df20e\x2db199\x2daeb3\x2dd9faf5cc3ce3.mount: Deactivated successfully. 
Apr 17 23:29:19.235832 containerd[1590]: time="2026-04-17T23:29:19.235669992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:19.238279 containerd[1590]: time="2026-04-17T23:29:19.237952281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 17 23:29:19.240970 containerd[1590]: time="2026-04-17T23:29:19.240546180Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:19.250901 containerd[1590]: time="2026-04-17T23:29:19.250640091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:19.252188 systemd-networkd[1234]: cali8333028ece2: Link UP Apr 17 23:29:19.253734 systemd-networkd[1234]: cali8333028ece2: Gained carrier Apr 17 23:29:19.257286 containerd[1590]: time="2026-04-17T23:29:19.257178868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.087614983s" Apr 17 23:29:19.257286 containerd[1590]: time="2026-04-17T23:29:19.257247999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:29:19.262510 containerd[1590]: time="2026-04-17T23:29:19.260940596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:29:19.270323 
containerd[1590]: time="2026-04-17T23:29:19.270277865Z" level=info msg="CreateContainer within sandbox \"0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.121 [INFO][4895] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0 coredns-674b8bbfcf- kube-system c11c886e-9476-40bc-ba5f-51336e501e07 1020 0 2026-04-17 23:28:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 coredns-674b8bbfcf-kdlnn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8333028ece2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.122 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.164 [INFO][4915] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" HandleID="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 
23:29:19.185 [INFO][4915] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" HandleID="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"coredns-674b8bbfcf-kdlnn", "timestamp":"2026-04-17 23:29:19.164789738 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000371340)} Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.185 [INFO][4915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.185 [INFO][4915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.185 [INFO][4915] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0' Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.192 [INFO][4915] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.201 [INFO][4915] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.210 [INFO][4915] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.214 [INFO][4915] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.218 [INFO][4915] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.218 [INFO][4915] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.222 [INFO][4915] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.229 [INFO][4915] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.239 [INFO][4915] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.198/26] block=192.168.124.192/26 handle="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.239 [INFO][4915] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.198/26] handle="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.239 [INFO][4915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:19.288318 containerd[1590]: 2026-04-17 23:29:19.239 [INFO][4915] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.198/26] IPv6=[] ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" HandleID="k8s-pod-network.b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:19.289299 containerd[1590]: 2026-04-17 23:29:19.246 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c11c886e-9476-40bc-ba5f-51336e501e07", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"coredns-674b8bbfcf-kdlnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8333028ece2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:19.289299 containerd[1590]: 2026-04-17 23:29:19.247 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.198/32] ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:19.289299 containerd[1590]: 2026-04-17 23:29:19.247 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8333028ece2 ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:19.289299 containerd[1590]: 2026-04-17 23:29:19.253 [INFO][4895] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:19.289299 containerd[1590]: 2026-04-17 23:29:19.254 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c11c886e-9476-40bc-ba5f-51336e501e07", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc", Pod:"coredns-674b8bbfcf-kdlnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8333028ece2", 
MAC:"32:5b:9d:d2:1a:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:19.289299 containerd[1590]: 2026-04-17 23:29:19.278 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc" Namespace="kube-system" Pod="coredns-674b8bbfcf-kdlnn" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:19.301892 containerd[1590]: time="2026-04-17T23:29:19.301799879Z" level=info msg="CreateContainer within sandbox \"0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"056f4c43b6c1f8518fa21661afe406cc6c0c70add26e6c336c5e7c11de138d51\"" Apr 17 23:29:19.320116 containerd[1590]: time="2026-04-17T23:29:19.319025663Z" level=info msg="StartContainer for \"056f4c43b6c1f8518fa21661afe406cc6c0c70add26e6c336c5e7c11de138d51\"" Apr 17 23:29:19.387440 containerd[1590]: time="2026-04-17T23:29:19.386817098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:19.387440 containerd[1590]: time="2026-04-17T23:29:19.386955360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:19.387440 containerd[1590]: time="2026-04-17T23:29:19.387005888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:19.387440 containerd[1590]: time="2026-04-17T23:29:19.387260690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:19.432149 systemd-networkd[1234]: cali973d5daca83: Link UP Apr 17 23:29:19.439822 systemd-networkd[1234]: cali973d5daca83: Gained carrier Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.072 [INFO][4885] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0 calico-kube-controllers-c84bc646f- calico-system 27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7 1021 0 2026-04-17 23:28:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c84bc646f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 calico-kube-controllers-c84bc646f-h6gw8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali973d5daca83 [] [] }} ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.073 [INFO][4885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" 
WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.163 [INFO][4908] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" HandleID="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.191 [INFO][4908] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" HandleID="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"calico-kube-controllers-c84bc646f-h6gw8", "timestamp":"2026-04-17 23:29:19.163701842 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184840)} Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.191 [INFO][4908] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.240 [INFO][4908] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.241 [INFO][4908] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0' Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.306 [INFO][4908] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.320 [INFO][4908] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.334 [INFO][4908] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.339 [INFO][4908] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.346 [INFO][4908] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.346 [INFO][4908] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.349 [INFO][4908] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.372 [INFO][4908] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.405 [INFO][4908] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.124.199/26] block=192.168.124.192/26 handle="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.405 [INFO][4908] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.199/26] handle="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.405 [INFO][4908] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:19.488172 containerd[1590]: 2026-04-17 23:29:19.405 [INFO][4908] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.199/26] IPv6=[] ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" HandleID="k8s-pod-network.b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:19.488731 containerd[1590]: 2026-04-17 23:29:19.419 [INFO][4885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0", GenerateName:"calico-kube-controllers-c84bc646f-", Namespace:"calico-system", SelfLink:"", UID:"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c84bc646f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"calico-kube-controllers-c84bc646f-h6gw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali973d5daca83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:19.488731 containerd[1590]: 2026-04-17 23:29:19.420 [INFO][4885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.199/32] ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:19.488731 containerd[1590]: 2026-04-17 23:29:19.420 [INFO][4885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali973d5daca83 ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:19.488731 containerd[1590]: 2026-04-17 23:29:19.445 [INFO][4885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:19.488731 containerd[1590]: 2026-04-17 23:29:19.450 [INFO][4885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0", GenerateName:"calico-kube-controllers-c84bc646f-", Namespace:"calico-system", SelfLink:"", UID:"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c84bc646f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa", Pod:"calico-kube-controllers-c84bc646f-h6gw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.199/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali973d5daca83", MAC:"ce:af:73:a4:67:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:19.488731 containerd[1590]: 2026-04-17 23:29:19.483 [INFO][4885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa" Namespace="calico-system" Pod="calico-kube-controllers-c84bc646f-h6gw8" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:19.493736 containerd[1590]: time="2026-04-17T23:29:19.493614877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kdlnn,Uid:c11c886e-9476-40bc-ba5f-51336e501e07,Namespace:kube-system,Attempt:1,} returns sandbox id \"b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc\"" Apr 17 23:29:19.500686 containerd[1590]: time="2026-04-17T23:29:19.500502990Z" level=info msg="CreateContainer within sandbox \"b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:29:19.551136 containerd[1590]: time="2026-04-17T23:29:19.550086883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:19.551136 containerd[1590]: time="2026-04-17T23:29:19.550156454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:19.551136 containerd[1590]: time="2026-04-17T23:29:19.550173417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:19.551136 containerd[1590]: time="2026-04-17T23:29:19.550280874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:19.556431 containerd[1590]: time="2026-04-17T23:29:19.556383620Z" level=info msg="StartContainer for \"056f4c43b6c1f8518fa21661afe406cc6c0c70add26e6c336c5e7c11de138d51\" returns successfully" Apr 17 23:29:19.563480 containerd[1590]: time="2026-04-17T23:29:19.563427038Z" level=info msg="CreateContainer within sandbox \"b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"baa3e26ec7d2a5337d30f2865f677d1950b63bd213ddb58edd9ae9c8a93470f8\"" Apr 17 23:29:19.565054 containerd[1590]: time="2026-04-17T23:29:19.565001413Z" level=info msg="StartContainer for \"baa3e26ec7d2a5337d30f2865f677d1950b63bd213ddb58edd9ae9c8a93470f8\"" Apr 17 23:29:19.575336 systemd-networkd[1234]: calia47afc8fdfb: Gained IPv6LL Apr 17 23:29:19.676300 containerd[1590]: time="2026-04-17T23:29:19.676258392Z" level=info msg="StartContainer for \"baa3e26ec7d2a5337d30f2865f677d1950b63bd213ddb58edd9ae9c8a93470f8\" returns successfully" Apr 17 23:29:19.693939 containerd[1590]: time="2026-04-17T23:29:19.693841474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c84bc646f-h6gw8,Uid:27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7,Namespace:calico-system,Attempt:1,} returns sandbox id \"b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa\"" Apr 17 23:29:19.737551 containerd[1590]: time="2026-04-17T23:29:19.736951080Z" level=info msg="StopPodSandbox for \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\"" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.818 [INFO][5134] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 
17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.818 [INFO][5134] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" iface="eth0" netns="/var/run/netns/cni-0d4c1e2a-dda7-c0f2-5056-46efcf524b71" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.819 [INFO][5134] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" iface="eth0" netns="/var/run/netns/cni-0d4c1e2a-dda7-c0f2-5056-46efcf524b71" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.823 [INFO][5134] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" iface="eth0" netns="/var/run/netns/cni-0d4c1e2a-dda7-c0f2-5056-46efcf524b71" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.824 [INFO][5134] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.824 [INFO][5134] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.865 [INFO][5143] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.865 [INFO][5143] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.866 [INFO][5143] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.879 [WARNING][5143] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.879 [INFO][5143] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.882 [INFO][5143] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:19.890997 containerd[1590]: 2026-04-17 23:29:19.885 [INFO][5134] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:19.894653 containerd[1590]: time="2026-04-17T23:29:19.894088794Z" level=info msg="TearDown network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\" successfully" Apr 17 23:29:19.894653 containerd[1590]: time="2026-04-17T23:29:19.894130681Z" level=info msg="StopPodSandbox for \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\" returns successfully" Apr 17 23:29:19.895983 containerd[1590]: time="2026-04-17T23:29:19.895630003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frnhq,Uid:c8d00a3a-510e-455b-af6d-68cf933b1622,Namespace:kube-system,Attempt:1,}" Apr 17 23:29:19.932897 systemd[1]: run-netns-cni\x2d0d4c1e2a\x2ddda7\x2dc0f2\x2d5056\x2d46efcf524b71.mount: Deactivated successfully. 
Apr 17 23:29:20.090725 systemd-networkd[1234]: calic181cfb3f7c: Gained IPv6LL Apr 17 23:29:20.113779 kubelet[2685]: I0417 23:29:20.112670 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kdlnn" podStartSLOduration=47.112652495 podStartE2EDuration="47.112652495s" podCreationTimestamp="2026-04-17 23:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:29:20.112275155 +0000 UTC m=+53.537030451" watchObservedRunningTime="2026-04-17 23:29:20.112652495 +0000 UTC m=+53.537407791" Apr 17 23:29:20.155596 systemd-networkd[1234]: cali3b44154a461: Link UP Apr 17 23:29:20.166192 systemd-networkd[1234]: cali3b44154a461: Gained carrier Apr 17 23:29:20.201867 kubelet[2685]: I0417 23:29:20.201660 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7565646fd6-lg558" podStartSLOduration=31.111130934 podStartE2EDuration="34.201633996s" podCreationTimestamp="2026-04-17 23:28:46 +0000 UTC" firstStartedPulling="2026-04-17 23:29:16.168205293 +0000 UTC m=+49.592960549" lastFinishedPulling="2026-04-17 23:29:19.258708315 +0000 UTC m=+52.683463611" observedRunningTime="2026-04-17 23:29:20.173299493 +0000 UTC m=+53.598054789" watchObservedRunningTime="2026-04-17 23:29:20.201633996 +0000 UTC m=+53.626389292" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:19.992 [INFO][5152] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0 coredns-674b8bbfcf- kube-system c8d00a3a-510e-455b-af6d-68cf933b1622 1036 0 2026-04-17 23:28:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-9c3210a1b0 
coredns-674b8bbfcf-frnhq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b44154a461 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:19.992 [INFO][5152] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.039 [INFO][5165] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" HandleID="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.055 [INFO][5165] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" HandleID="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273af0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-9c3210a1b0", "pod":"coredns-674b8bbfcf-frnhq", "timestamp":"2026-04-17 23:29:20.039717824 +0000 UTC"}, Hostname:"ci-4081-3-6-n-9c3210a1b0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x4000228dc0)} Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.055 [INFO][5165] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.055 [INFO][5165] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.056 [INFO][5165] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-9c3210a1b0' Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.059 [INFO][5165] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.075 [INFO][5165] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.083 [INFO][5165] ipam/ipam.go 526: Trying affinity for 192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.092 [INFO][5165] ipam/ipam.go 160: Attempting to load block cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.103 [INFO][5165] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.124.192/26 host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.103 [INFO][5165] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.124.192/26 handle="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.106 [INFO][5165] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714 Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 
23:29:20.114 [INFO][5165] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.124.192/26 handle="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.124 [INFO][5165] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.124.200/26] block=192.168.124.192/26 handle="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.124 [INFO][5165] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.124.200/26] handle="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" host="ci-4081-3-6-n-9c3210a1b0" Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.124 [INFO][5165] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:20.206329 containerd[1590]: 2026-04-17 23:29:20.124 [INFO][5165] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.124.200/26] IPv6=[] ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" HandleID="k8s-pod-network.07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:20.212827 containerd[1590]: 2026-04-17 23:29:20.129 [INFO][5152] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", 
UID:"c8d00a3a-510e-455b-af6d-68cf933b1622", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"", Pod:"coredns-674b8bbfcf-frnhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b44154a461", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:20.212827 containerd[1590]: 2026-04-17 23:29:20.129 [INFO][5152] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.200/32] ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:20.212827 containerd[1590]: 2026-04-17 23:29:20.129 [INFO][5152] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to cali3b44154a461 ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:20.212827 containerd[1590]: 2026-04-17 23:29:20.174 [INFO][5152] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:20.212827 containerd[1590]: 2026-04-17 23:29:20.176 [INFO][5152] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8d00a3a-510e-455b-af6d-68cf933b1622", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", 
ContainerID:"07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714", Pod:"coredns-674b8bbfcf-frnhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b44154a461", MAC:"4a:ea:91:81:3f:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:20.212827 containerd[1590]: 2026-04-17 23:29:20.197 [INFO][5152] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714" Namespace="kube-system" Pod="coredns-674b8bbfcf-frnhq" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:20.257132 containerd[1590]: time="2026-04-17T23:29:20.256508317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:29:20.257536 containerd[1590]: time="2026-04-17T23:29:20.257166621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:29:20.257536 containerd[1590]: time="2026-04-17T23:29:20.257245954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:20.258907 containerd[1590]: time="2026-04-17T23:29:20.257662340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:29:20.348922 containerd[1590]: time="2026-04-17T23:29:20.348876396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-frnhq,Uid:c8d00a3a-510e-455b-af6d-68cf933b1622,Namespace:kube-system,Attempt:1,} returns sandbox id \"07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714\"" Apr 17 23:29:20.361628 containerd[1590]: time="2026-04-17T23:29:20.361457276Z" level=info msg="CreateContainer within sandbox \"07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:29:20.384477 containerd[1590]: time="2026-04-17T23:29:20.384263460Z" level=info msg="CreateContainer within sandbox \"07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02f9249b402122ce293ba662d156c981b99957658fe3463508501328748f2b9f\"" Apr 17 23:29:20.387162 containerd[1590]: time="2026-04-17T23:29:20.386993534Z" level=info msg="StartContainer for \"02f9249b402122ce293ba662d156c981b99957658fe3463508501328748f2b9f\"" Apr 17 23:29:20.468202 containerd[1590]: time="2026-04-17T23:29:20.467862546Z" level=info msg="StartContainer for \"02f9249b402122ce293ba662d156c981b99957658fe3463508501328748f2b9f\" returns successfully" Apr 17 23:29:21.047266 systemd-networkd[1234]: cali8333028ece2: Gained IPv6LL Apr 17 23:29:21.113270 systemd-networkd[1234]: cali973d5daca83: Gained IPv6LL Apr 17 23:29:21.144416 kubelet[2685]: I0417 23:29:21.141438 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:29:21.203079 kubelet[2685]: I0417 23:29:21.202628 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-frnhq" podStartSLOduration=48.202608127 podStartE2EDuration="48.202608127s" podCreationTimestamp="2026-04-17 23:28:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:29:21.165280048 +0000 UTC m=+54.590035424" watchObservedRunningTime="2026-04-17 23:29:21.202608127 +0000 UTC m=+54.627363383" Apr 17 23:29:21.304298 systemd-networkd[1234]: cali3b44154a461: Gained IPv6LL Apr 17 23:29:21.744190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2537755782.mount: Deactivated successfully. Apr 17 23:29:22.098945 containerd[1590]: time="2026-04-17T23:29:22.098793029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:22.100702 containerd[1590]: time="2026-04-17T23:29:22.100653635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 17 23:29:22.101747 containerd[1590]: time="2026-04-17T23:29:22.101339541Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:22.104695 containerd[1590]: time="2026-04-17T23:29:22.104642410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:22.105789 containerd[1590]: time="2026-04-17T23:29:22.105614960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.844625476s" Apr 17 23:29:22.105789 containerd[1590]: time="2026-04-17T23:29:22.105665607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 17 23:29:22.108083 containerd[1590]: time="2026-04-17T23:29:22.107944118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:29:22.118745 containerd[1590]: time="2026-04-17T23:29:22.118672251Z" level=info msg="CreateContainer within sandbox \"528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:29:22.151805 containerd[1590]: time="2026-04-17T23:29:22.151637010Z" level=info msg="CreateContainer within sandbox \"528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2bcb67c36427cb1241ab8a808b087477acbe1a3bd28a69c582e0d5d88317108c\"" Apr 17 23:29:22.154095 containerd[1590]: time="2026-04-17T23:29:22.152706854Z" level=info msg="StartContainer for \"2bcb67c36427cb1241ab8a808b087477acbe1a3bd28a69c582e0d5d88317108c\"" Apr 17 23:29:22.269823 containerd[1590]: time="2026-04-17T23:29:22.269704278Z" level=info msg="StartContainer for \"2bcb67c36427cb1241ab8a808b087477acbe1a3bd28a69c582e0d5d88317108c\" returns successfully" Apr 17 23:29:22.491319 containerd[1590]: time="2026-04-17T23:29:22.490789978Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:22.492092 containerd[1590]: time="2026-04-17T23:29:22.491913111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:29:22.498964 containerd[1590]: 
time="2026-04-17T23:29:22.498854460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 390.683907ms" Apr 17 23:29:22.498964 containerd[1590]: time="2026-04-17T23:29:22.498930192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 17 23:29:22.502797 containerd[1590]: time="2026-04-17T23:29:22.502654326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:29:22.505953 containerd[1590]: time="2026-04-17T23:29:22.505366223Z" level=info msg="CreateContainer within sandbox \"846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:29:22.526522 containerd[1590]: time="2026-04-17T23:29:22.526443750Z" level=info msg="CreateContainer within sandbox \"846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"80bd469e7a5db90b4715a8353a340fa609293e352e627ef5d1f4af6def856139\"" Apr 17 23:29:22.529363 containerd[1590]: time="2026-04-17T23:29:22.529309992Z" level=info msg="StartContainer for \"80bd469e7a5db90b4715a8353a340fa609293e352e627ef5d1f4af6def856139\"" Apr 17 23:29:22.601320 containerd[1590]: time="2026-04-17T23:29:22.601239233Z" level=info msg="StartContainer for \"80bd469e7a5db90b4715a8353a340fa609293e352e627ef5d1f4af6def856139\" returns successfully" Apr 17 23:29:23.133857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921743594.mount: Deactivated successfully. 
Apr 17 23:29:23.204385 kubelet[2685]: I0417 23:29:23.197171 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-m9z9b" podStartSLOduration=32.654774938 podStartE2EDuration="36.197152883s" podCreationTimestamp="2026-04-17 23:28:47 +0000 UTC" firstStartedPulling="2026-04-17 23:29:18.565198837 +0000 UTC m=+51.989954133" lastFinishedPulling="2026-04-17 23:29:22.107576782 +0000 UTC m=+55.532332078" observedRunningTime="2026-04-17 23:29:23.175300285 +0000 UTC m=+56.600055581" watchObservedRunningTime="2026-04-17 23:29:23.197152883 +0000 UTC m=+56.621908139" Apr 17 23:29:23.873532 systemd[1]: Started sshd@7-91.99.151.60:22-50.85.169.122:53132.service - OpenSSH per-connection server daemon (50.85.169.122:53132). Apr 17 23:29:24.014559 sshd[5422]: Accepted publickey for core from 50.85.169.122 port 53132 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:24.020879 sshd[5422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:24.036079 systemd-logind[1562]: New session 8 of user core. Apr 17 23:29:24.043832 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:29:24.327482 sshd[5422]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:24.337789 systemd[1]: sshd@7-91.99.151.60:22-50.85.169.122:53132.service: Deactivated successfully. Apr 17 23:29:24.348974 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:29:24.354296 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:29:24.355752 systemd-logind[1562]: Removed session 8. 
Apr 17 23:29:24.679351 kubelet[2685]: I0417 23:29:24.678733 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7565646fd6-h7t75" podStartSLOduration=34.815234384 podStartE2EDuration="38.678716612s" podCreationTimestamp="2026-04-17 23:28:46 +0000 UTC" firstStartedPulling="2026-04-17 23:29:18.636772408 +0000 UTC m=+52.061527704" lastFinishedPulling="2026-04-17 23:29:22.500254636 +0000 UTC m=+55.925009932" observedRunningTime="2026-04-17 23:29:23.20312411 +0000 UTC m=+56.627879406" watchObservedRunningTime="2026-04-17 23:29:24.678716612 +0000 UTC m=+58.103471908" Apr 17 23:29:25.181097 containerd[1590]: time="2026-04-17T23:29:25.180433329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:25.184450 containerd[1590]: time="2026-04-17T23:29:25.181601702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 17 23:29:25.184450 containerd[1590]: time="2026-04-17T23:29:25.182989347Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:25.189661 containerd[1590]: time="2026-04-17T23:29:25.189597964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:29:25.191392 containerd[1590]: time="2026-04-17T23:29:25.191348783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.688648811s" Apr 17 23:29:25.191492 containerd[1590]: time="2026-04-17T23:29:25.191402391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 17 23:29:25.216080 containerd[1590]: time="2026-04-17T23:29:25.215713305Z" level=info msg="CreateContainer within sandbox \"b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:29:25.234715 containerd[1590]: time="2026-04-17T23:29:25.234524846Z" level=info msg="CreateContainer within sandbox \"b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fc43916c0ab588274507c907dac7edadfd173c4dab7693abb60b433d9e67a13f\"" Apr 17 23:29:25.237949 containerd[1590]: time="2026-04-17T23:29:25.236454051Z" level=info msg="StartContainer for \"fc43916c0ab588274507c907dac7edadfd173c4dab7693abb60b433d9e67a13f\"" Apr 17 23:29:25.282240 systemd[1]: run-containerd-runc-k8s.io-fc43916c0ab588274507c907dac7edadfd173c4dab7693abb60b433d9e67a13f-runc.5Grs59.mount: Deactivated successfully. 
Apr 17 23:29:25.322518 containerd[1590]: time="2026-04-17T23:29:25.320371457Z" level=info msg="StartContainer for \"fc43916c0ab588274507c907dac7edadfd173c4dab7693abb60b433d9e67a13f\" returns successfully" Apr 17 23:29:26.191070 kubelet[2685]: I0417 23:29:26.190946 2685 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c84bc646f-h6gw8" podStartSLOduration=30.696228966 podStartE2EDuration="36.190928372s" podCreationTimestamp="2026-04-17 23:28:50 +0000 UTC" firstStartedPulling="2026-04-17 23:29:19.697404649 +0000 UTC m=+53.122159945" lastFinishedPulling="2026-04-17 23:29:25.192104055 +0000 UTC m=+58.616859351" observedRunningTime="2026-04-17 23:29:26.190811275 +0000 UTC m=+59.615566571" watchObservedRunningTime="2026-04-17 23:29:26.190928372 +0000 UTC m=+59.615683668" Apr 17 23:29:26.744755 containerd[1590]: time="2026-04-17T23:29:26.744712314Z" level=info msg="StopPodSandbox for \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\"" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.786 [WARNING][5574] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c11c886e-9476-40bc-ba5f-51336e501e07", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc", Pod:"coredns-674b8bbfcf-kdlnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8333028ece2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:26.845923 containerd[1590]: 
2026-04-17 23:29:26.787 [INFO][5574] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.787 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" iface="eth0" netns="" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.787 [INFO][5574] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.787 [INFO][5574] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.826 [INFO][5581] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.826 [INFO][5581] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.826 [INFO][5581] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.838 [WARNING][5581] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.838 [INFO][5581] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.840 [INFO][5581] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:26.845923 containerd[1590]: 2026-04-17 23:29:26.843 [INFO][5574] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.847392 containerd[1590]: time="2026-04-17T23:29:26.845961659Z" level=info msg="TearDown network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\" successfully" Apr 17 23:29:26.847392 containerd[1590]: time="2026-04-17T23:29:26.845994824Z" level=info msg="StopPodSandbox for \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\" returns successfully" Apr 17 23:29:26.847392 containerd[1590]: time="2026-04-17T23:29:26.846681404Z" level=info msg="RemovePodSandbox for \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\"" Apr 17 23:29:26.849293 containerd[1590]: time="2026-04-17T23:29:26.849246978Z" level=info msg="Forcibly stopping sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\"" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.916 [WARNING][5595] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c11c886e-9476-40bc-ba5f-51336e501e07", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"b7f332377e1ee339a2b575922db1ed591f7a77684d8acbeafad7686984a751cc", Pod:"coredns-674b8bbfcf-kdlnn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8333028ece2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:26.963151 containerd[1590]: 
2026-04-17 23:29:26.916 [INFO][5595] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.916 [INFO][5595] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" iface="eth0" netns="" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.916 [INFO][5595] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.916 [INFO][5595] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.942 [INFO][5602] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.942 [INFO][5602] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.942 [INFO][5602] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.953 [WARNING][5602] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.954 [INFO][5602] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" HandleID="k8s-pod-network.a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--kdlnn-eth0" Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.956 [INFO][5602] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:26.963151 containerd[1590]: 2026-04-17 23:29:26.960 [INFO][5595] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e" Apr 17 23:29:26.963679 containerd[1590]: time="2026-04-17T23:29:26.963224621Z" level=info msg="TearDown network for sandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\" successfully" Apr 17 23:29:26.976367 containerd[1590]: time="2026-04-17T23:29:26.976259604Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:26.976367 containerd[1590]: time="2026-04-17T23:29:26.976394904Z" level=info msg="RemovePodSandbox \"a3bfa17d175a355667cba403d4d757d9f6748ccf2e9aac5453bf89d7d0c8ee9e\" returns successfully" Apr 17 23:29:26.977022 containerd[1590]: time="2026-04-17T23:29:26.976987351Z" level=info msg="StopPodSandbox for \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\"" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.027 [WARNING][5616] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cbe78961-645a-4299-9350-a4feb03fb521", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a", Pod:"goldmane-5b85766d88-m9z9b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic181cfb3f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.027 [INFO][5616] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.028 [INFO][5616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" iface="eth0" netns="" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.028 [INFO][5616] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.028 [INFO][5616] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.060 [INFO][5623] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.060 [INFO][5623] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.060 [INFO][5623] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.072 [WARNING][5623] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.072 [INFO][5623] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.074 [INFO][5623] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.087464 containerd[1590]: 2026-04-17 23:29:27.079 [INFO][5616] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.087464 containerd[1590]: time="2026-04-17T23:29:27.087247304Z" level=info msg="TearDown network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\" successfully" Apr 17 23:29:27.087464 containerd[1590]: time="2026-04-17T23:29:27.087272067Z" level=info msg="StopPodSandbox for \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\" returns successfully" Apr 17 23:29:27.089478 containerd[1590]: time="2026-04-17T23:29:27.089070727Z" level=info msg="RemovePodSandbox for \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\"" Apr 17 23:29:27.089478 containerd[1590]: time="2026-04-17T23:29:27.089111853Z" level=info msg="Forcibly stopping sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\"" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.155 [WARNING][5639] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"cbe78961-645a-4299-9350-a4feb03fb521", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"528ad417f9d217db2410f07aed68e5350223c86fd363c8ebd738a9cdf7661e3a", Pod:"goldmane-5b85766d88-m9z9b", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic181cfb3f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.155 [INFO][5639] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.155 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" iface="eth0" netns="" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.155 [INFO][5639] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.155 [INFO][5639] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.183 [INFO][5646] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.183 [INFO][5646] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.183 [INFO][5646] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.197 [WARNING][5646] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.197 [INFO][5646] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" HandleID="k8s-pod-network.97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-goldmane--5b85766d88--m9z9b-eth0" Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.200 [INFO][5646] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.206213 containerd[1590]: 2026-04-17 23:29:27.203 [INFO][5639] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57" Apr 17 23:29:27.206213 containerd[1590]: time="2026-04-17T23:29:27.206170666Z" level=info msg="TearDown network for sandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\" successfully" Apr 17 23:29:27.212034 containerd[1590]: time="2026-04-17T23:29:27.211979744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:27.212165 containerd[1590]: time="2026-04-17T23:29:27.212140047Z" level=info msg="RemovePodSandbox \"97e62805586118cd0cd80141afe085be33431c3dec65f368cdb1a10f63df8c57\" returns successfully" Apr 17 23:29:27.212947 containerd[1590]: time="2026-04-17T23:29:27.212905077Z" level=info msg="StopPodSandbox for \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\"" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.256 [WARNING][5660] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.256 [INFO][5660] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.256 [INFO][5660] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" iface="eth0" netns="" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.256 [INFO][5660] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.256 [INFO][5660] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.278 [INFO][5668] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.279 [INFO][5668] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.279 [INFO][5668] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.291 [WARNING][5668] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.291 [INFO][5668] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.295 [INFO][5668] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.300006 containerd[1590]: 2026-04-17 23:29:27.298 [INFO][5660] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.300746 containerd[1590]: time="2026-04-17T23:29:27.300598733Z" level=info msg="TearDown network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\" successfully" Apr 17 23:29:27.300746 containerd[1590]: time="2026-04-17T23:29:27.300636938Z" level=info msg="StopPodSandbox for \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\" returns successfully" Apr 17 23:29:27.301419 containerd[1590]: time="2026-04-17T23:29:27.301239345Z" level=info msg="RemovePodSandbox for \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\"" Apr 17 23:29:27.301419 containerd[1590]: time="2026-04-17T23:29:27.301278271Z" level=info msg="Forcibly stopping sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\"" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.350 [WARNING][5682] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" WorkloadEndpoint="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.350 [INFO][5682] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.350 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" iface="eth0" netns="" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.350 [INFO][5682] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.350 [INFO][5682] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.379 [INFO][5689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.379 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.379 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.395 [WARNING][5689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.395 [INFO][5689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" HandleID="k8s-pod-network.d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-whisker--54865d9c69--vrxcx-eth0" Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.397 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.402883 containerd[1590]: 2026-04-17 23:29:27.400 [INFO][5682] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801" Apr 17 23:29:27.402883 containerd[1590]: time="2026-04-17T23:29:27.402712429Z" level=info msg="TearDown network for sandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\" successfully" Apr 17 23:29:27.407705 containerd[1590]: time="2026-04-17T23:29:27.407656662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:27.407869 containerd[1590]: time="2026-04-17T23:29:27.407778720Z" level=info msg="RemovePodSandbox \"d18ad3499aea932b35e861a26de6b6fa5e5ea5f4938f9e97bc6d6d66c9cd9801\" returns successfully" Apr 17 23:29:27.408715 containerd[1590]: time="2026-04-17T23:29:27.408393929Z" level=info msg="StopPodSandbox for \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\"" Apr 17 23:29:27.435025 kubelet[2685]: I0417 23:29:27.434950 2685 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.455 [WARNING][5703] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0", GenerateName:"calico-kube-controllers-c84bc646f-", Namespace:"calico-system", SelfLink:"", UID:"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c84bc646f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa", 
Pod:"calico-kube-controllers-c84bc646f-h6gw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali973d5daca83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.457 [INFO][5703] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.457 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" iface="eth0" netns="" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.458 [INFO][5703] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.458 [INFO][5703] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.526 [INFO][5711] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.526 [INFO][5711] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.526 [INFO][5711] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.539 [WARNING][5711] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.539 [INFO][5711] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.542 [INFO][5711] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.547192 containerd[1590]: 2026-04-17 23:29:27.544 [INFO][5703] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.547192 containerd[1590]: time="2026-04-17T23:29:27.546761337Z" level=info msg="TearDown network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\" successfully" Apr 17 23:29:27.547192 containerd[1590]: time="2026-04-17T23:29:27.546787901Z" level=info msg="StopPodSandbox for \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\" returns successfully" Apr 17 23:29:27.548659 containerd[1590]: time="2026-04-17T23:29:27.547958990Z" level=info msg="RemovePodSandbox for \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\"" Apr 17 23:29:27.548659 containerd[1590]: time="2026-04-17T23:29:27.547994955Z" level=info msg="Forcibly stopping sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\"" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.606 [WARNING][5727] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0", GenerateName:"calico-kube-controllers-c84bc646f-", Namespace:"calico-system", SelfLink:"", UID:"27ea37b5-e978-46f7-9ce2-f5c0e94e4ba7", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c84bc646f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"b917839a8dfcdd4c876e326105c3d7ce4605914611a11083ff25093ded3fe1aa", Pod:"calico-kube-controllers-c84bc646f-h6gw8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali973d5daca83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.606 [INFO][5727] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.606 [INFO][5727] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" iface="eth0" netns="" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.606 [INFO][5727] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.606 [INFO][5727] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.637 [INFO][5735] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.640 [INFO][5735] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.640 [INFO][5735] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.651 [WARNING][5735] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.651 [INFO][5735] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" HandleID="k8s-pod-network.bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--kube--controllers--c84bc646f--h6gw8-eth0" Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.656 [INFO][5735] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.664282 containerd[1590]: 2026-04-17 23:29:27.662 [INFO][5727] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f" Apr 17 23:29:27.664867 containerd[1590]: time="2026-04-17T23:29:27.664839017Z" level=info msg="TearDown network for sandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\" successfully" Apr 17 23:29:27.669329 containerd[1590]: time="2026-04-17T23:29:27.669282338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:27.669534 containerd[1590]: time="2026-04-17T23:29:27.669512691Z" level=info msg="RemovePodSandbox \"bd785aa89335c50bebc876a3c45584d8866b3ade937b9383a0b42178fae8612f\" returns successfully" Apr 17 23:29:27.670558 containerd[1590]: time="2026-04-17T23:29:27.670526638Z" level=info msg="StopPodSandbox for \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\"" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.713 [WARNING][5749] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"8e2b312e-b94b-42fd-8904-f8f9ac9041dd", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214", Pod:"calico-apiserver-7565646fd6-h7t75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia47afc8fdfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.713 [INFO][5749] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.713 [INFO][5749] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" iface="eth0" netns="" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.713 [INFO][5749] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.713 [INFO][5749] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.735 [INFO][5756] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.736 [INFO][5756] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.736 [INFO][5756] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.748 [WARNING][5756] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.748 [INFO][5756] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.751 [INFO][5756] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.756345 containerd[1590]: 2026-04-17 23:29:27.753 [INFO][5749] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.758144 containerd[1590]: time="2026-04-17T23:29:27.756394629Z" level=info msg="TearDown network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\" successfully" Apr 17 23:29:27.758144 containerd[1590]: time="2026-04-17T23:29:27.756422673Z" level=info msg="StopPodSandbox for \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\" returns successfully" Apr 17 23:29:27.758144 containerd[1590]: time="2026-04-17T23:29:27.756932867Z" level=info msg="RemovePodSandbox for \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\"" Apr 17 23:29:27.758144 containerd[1590]: time="2026-04-17T23:29:27.756984795Z" level=info msg="Forcibly stopping sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\"" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.807 [WARNING][5770] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"8e2b312e-b94b-42fd-8904-f8f9ac9041dd", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"846f68f0ff31a90dde45eedf8759e954b6d0d5328f51535c9d7bd72fdf170214", Pod:"calico-apiserver-7565646fd6-h7t75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia47afc8fdfb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.807 [INFO][5770] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.807 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" iface="eth0" netns="" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.807 [INFO][5770] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.807 [INFO][5770] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.830 [INFO][5777] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.830 [INFO][5777] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.830 [INFO][5777] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.843 [WARNING][5777] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.843 [INFO][5777] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" HandleID="k8s-pod-network.83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--h7t75-eth0" Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.846 [INFO][5777] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.850549 containerd[1590]: 2026-04-17 23:29:27.848 [INFO][5770] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be" Apr 17 23:29:27.851524 containerd[1590]: time="2026-04-17T23:29:27.850708240Z" level=info msg="TearDown network for sandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\" successfully" Apr 17 23:29:27.856258 containerd[1590]: time="2026-04-17T23:29:27.856214715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:27.856577 containerd[1590]: time="2026-04-17T23:29:27.856470071Z" level=info msg="RemovePodSandbox \"83b164bdfcb4662a96ba792f56c4830d57c9b252c5bf5cc1094e3ac223c7b6be\" returns successfully" Apr 17 23:29:27.857115 containerd[1590]: time="2026-04-17T23:29:27.857089761Z" level=info msg="StopPodSandbox for \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\"" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.902 [WARNING][5791] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8d00a3a-510e-455b-af6d-68cf933b1622", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714", Pod:"coredns-674b8bbfcf-frnhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b44154a461", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.902 [INFO][5791] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.902 [INFO][5791] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" iface="eth0" netns="" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.902 [INFO][5791] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.902 [INFO][5791] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.925 [INFO][5799] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.926 [INFO][5799] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.926 [INFO][5799] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.936 [WARNING][5799] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.936 [INFO][5799] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.940 [INFO][5799] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:27.945002 containerd[1590]: 2026-04-17 23:29:27.943 [INFO][5791] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:27.945002 containerd[1590]: time="2026-04-17T23:29:27.944986445Z" level=info msg="TearDown network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\" successfully" Apr 17 23:29:27.945851 containerd[1590]: time="2026-04-17T23:29:27.945017690Z" level=info msg="StopPodSandbox for \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\" returns successfully" Apr 17 23:29:27.945851 containerd[1590]: time="2026-04-17T23:29:27.945622057Z" level=info msg="RemovePodSandbox for \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\"" Apr 17 23:29:27.945851 containerd[1590]: time="2026-04-17T23:29:27.945689187Z" level=info msg="Forcibly stopping sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\"" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:27.993 [WARNING][5813] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8d00a3a-510e-455b-af6d-68cf933b1622", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"07c5733b9a809e5d62e477a7cfa0cfa0da92a8e5b1b66d34ca41f6da7be3e714", Pod:"coredns-674b8bbfcf-frnhq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b44154a461", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:28.050550 containerd[1590]: 
2026-04-17 23:29:27.993 [INFO][5813] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:27.993 [INFO][5813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" iface="eth0" netns="" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:27.993 [INFO][5813] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:27.993 [INFO][5813] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:28.027 [INFO][5820] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:28.027 [INFO][5820] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:28.028 [INFO][5820] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:28.039 [WARNING][5820] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:28.039 [INFO][5820] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" HandleID="k8s-pod-network.5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-coredns--674b8bbfcf--frnhq-eth0" Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:28.042 [INFO][5820] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:28.050550 containerd[1590]: 2026-04-17 23:29:28.047 [INFO][5813] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8" Apr 17 23:29:28.050550 containerd[1590]: time="2026-04-17T23:29:28.050505314Z" level=info msg="TearDown network for sandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\" successfully" Apr 17 23:29:28.056608 containerd[1590]: time="2026-04-17T23:29:28.056536695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:29:28.058484 containerd[1590]: time="2026-04-17T23:29:28.056627708Z" level=info msg="RemovePodSandbox \"5437331901a95a057151e32cdfb7baf4ec22afbf19695b978865967e0691e7b8\" returns successfully" Apr 17 23:29:28.060359 containerd[1590]: time="2026-04-17T23:29:28.060015471Z" level=info msg="StopPodSandbox for \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\"" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.104 [WARNING][5834] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31", Pod:"calico-apiserver-7565646fd6-lg558", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87eb23c0096", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.104 [INFO][5834] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.104 [INFO][5834] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" iface="eth0" netns="" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.104 [INFO][5834] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.104 [INFO][5834] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.130 [INFO][5841] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.130 [INFO][5841] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.130 [INFO][5841] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.142 [WARNING][5841] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.142 [INFO][5841] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.144 [INFO][5841] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:28.148556 containerd[1590]: 2026-04-17 23:29:28.146 [INFO][5834] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.149268 containerd[1590]: time="2026-04-17T23:29:28.149093504Z" level=info msg="TearDown network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\" successfully" Apr 17 23:29:28.149268 containerd[1590]: time="2026-04-17T23:29:28.149125388Z" level=info msg="StopPodSandbox for \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\" returns successfully" Apr 17 23:29:28.150468 containerd[1590]: time="2026-04-17T23:29:28.149682908Z" level=info msg="RemovePodSandbox for \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\"" Apr 17 23:29:28.150468 containerd[1590]: time="2026-04-17T23:29:28.149715873Z" level=info msg="Forcibly stopping sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\"" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.198 [WARNING][5855] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0", GenerateName:"calico-apiserver-7565646fd6-", Namespace:"calico-system", SelfLink:"", UID:"97b6310d-7e82-44f9-91ad-fcd9e6ad7cc3", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7565646fd6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-9c3210a1b0", ContainerID:"0ecc5bbdc48b4ff8c924b11ebe4ca484843e0ebba9bcaaebd32d506e7d00af31", Pod:"calico-apiserver-7565646fd6-lg558", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali87eb23c0096", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.199 [INFO][5855] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.199 [INFO][5855] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" iface="eth0" netns="" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.199 [INFO][5855] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.199 [INFO][5855] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.222 [INFO][5862] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.222 [INFO][5862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.222 [INFO][5862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.234 [WARNING][5862] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.235 [INFO][5862] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" HandleID="k8s-pod-network.e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Workload="ci--4081--3--6--n--9c3210a1b0-k8s-calico--apiserver--7565646fd6--lg558-eth0" Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.237 [INFO][5862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:29:28.244480 containerd[1590]: 2026-04-17 23:29:28.239 [INFO][5855] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac" Apr 17 23:29:28.244480 containerd[1590]: time="2026-04-17T23:29:28.244348218Z" level=info msg="TearDown network for sandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\" successfully" Apr 17 23:29:28.250647 containerd[1590]: time="2026-04-17T23:29:28.250598270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:29:28.250765 containerd[1590]: time="2026-04-17T23:29:28.250682242Z" level=info msg="RemovePodSandbox \"e75c02e397d074ae6ed86738c7e477244770a2a922ed2bb437f79b5f0285c5ac\" returns successfully" Apr 17 23:29:29.356096 systemd[1]: Started sshd@8-91.99.151.60:22-50.85.169.122:53142.service - OpenSSH per-connection server daemon (50.85.169.122:53142). 
Apr 17 23:29:29.485532 sshd[5868]: Accepted publickey for core from 50.85.169.122 port 53142 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:29.488322 sshd[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:29.493599 systemd-logind[1562]: New session 9 of user core. Apr 17 23:29:29.501551 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:29:29.710572 sshd[5868]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:29.716424 systemd[1]: sshd@8-91.99.151.60:22-50.85.169.122:53142.service: Deactivated successfully. Apr 17 23:29:29.722351 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:29:29.723453 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:29:29.724688 systemd-logind[1562]: Removed session 9. Apr 17 23:29:34.739101 systemd[1]: Started sshd@9-91.99.151.60:22-50.85.169.122:59076.service - OpenSSH per-connection server daemon (50.85.169.122:59076). Apr 17 23:29:34.858687 sshd[5921]: Accepted publickey for core from 50.85.169.122 port 59076 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:34.861535 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:34.867605 systemd-logind[1562]: New session 10 of user core. Apr 17 23:29:34.874922 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:29:35.075445 sshd[5921]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:35.081281 systemd[1]: sshd@9-91.99.151.60:22-50.85.169.122:59076.service: Deactivated successfully. Apr 17 23:29:35.087398 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:29:35.087762 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:29:35.090042 systemd-logind[1562]: Removed session 10. 
Apr 17 23:29:40.106592 systemd[1]: Started sshd@10-91.99.151.60:22-50.85.169.122:53886.service - OpenSSH per-connection server daemon (50.85.169.122:53886). Apr 17 23:29:40.252737 sshd[5943]: Accepted publickey for core from 50.85.169.122 port 53886 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:40.256397 sshd[5943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:40.263654 systemd-logind[1562]: New session 11 of user core. Apr 17 23:29:40.269738 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:29:40.465545 sshd[5943]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:40.473429 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:29:40.474600 systemd[1]: sshd@10-91.99.151.60:22-50.85.169.122:53886.service: Deactivated successfully. Apr 17 23:29:40.480843 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:29:40.482924 systemd-logind[1562]: Removed session 11. Apr 17 23:29:40.494538 systemd[1]: Started sshd@11-91.99.151.60:22-50.85.169.122:53890.service - OpenSSH per-connection server daemon (50.85.169.122:53890). Apr 17 23:29:40.621637 sshd[5977]: Accepted publickey for core from 50.85.169.122 port 53890 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:40.623979 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:40.629496 systemd-logind[1562]: New session 12 of user core. Apr 17 23:29:40.638132 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:29:40.903000 sshd[5977]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:40.922116 systemd[1]: sshd@11-91.99.151.60:22-50.85.169.122:53890.service: Deactivated successfully. Apr 17 23:29:40.931793 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:29:40.935564 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit. 
Apr 17 23:29:40.952639 systemd[1]: Started sshd@12-91.99.151.60:22-50.85.169.122:53894.service - OpenSSH per-connection server daemon (50.85.169.122:53894). Apr 17 23:29:40.956559 systemd-logind[1562]: Removed session 12. Apr 17 23:29:41.095630 sshd[5989]: Accepted publickey for core from 50.85.169.122 port 53894 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:41.099292 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:41.107216 systemd-logind[1562]: New session 13 of user core. Apr 17 23:29:41.114576 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:29:41.299076 sshd[5989]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:41.307363 systemd[1]: sshd@12-91.99.151.60:22-50.85.169.122:53894.service: Deactivated successfully. Apr 17 23:29:41.312458 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:29:41.313484 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:29:41.314868 systemd-logind[1562]: Removed session 13. Apr 17 23:29:46.322395 systemd[1]: Started sshd@13-91.99.151.60:22-50.85.169.122:53904.service - OpenSSH per-connection server daemon (50.85.169.122:53904). Apr 17 23:29:46.438145 sshd[6044]: Accepted publickey for core from 50.85.169.122 port 53904 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:46.439412 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:46.444666 systemd-logind[1562]: New session 14 of user core. Apr 17 23:29:46.449533 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:29:46.635503 sshd[6044]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:46.644392 systemd[1]: sshd@13-91.99.151.60:22-50.85.169.122:53904.service: Deactivated successfully. Apr 17 23:29:46.651417 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 17 23:29:46.652663 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:29:46.654408 systemd-logind[1562]: Removed session 14. Apr 17 23:29:51.663250 systemd[1]: Started sshd@14-91.99.151.60:22-50.85.169.122:52606.service - OpenSSH per-connection server daemon (50.85.169.122:52606). Apr 17 23:29:51.783460 sshd[6072]: Accepted publickey for core from 50.85.169.122 port 52606 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:51.784894 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:51.791390 systemd-logind[1562]: New session 15 of user core. Apr 17 23:29:51.795483 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:29:51.982724 sshd[6072]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:51.989036 systemd[1]: sshd@14-91.99.151.60:22-50.85.169.122:52606.service: Deactivated successfully. Apr 17 23:29:51.996139 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:29:51.998455 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:29:52.014232 systemd[1]: Started sshd@15-91.99.151.60:22-50.85.169.122:52620.service - OpenSSH per-connection server daemon (50.85.169.122:52620). Apr 17 23:29:52.015213 systemd-logind[1562]: Removed session 15. Apr 17 23:29:52.143578 sshd[6086]: Accepted publickey for core from 50.85.169.122 port 52620 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:52.147923 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:52.153505 systemd-logind[1562]: New session 16 of user core. Apr 17 23:29:52.161613 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:29:52.526520 sshd[6086]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:52.532672 systemd[1]: sshd@15-91.99.151.60:22-50.85.169.122:52620.service: Deactivated successfully. 
Apr 17 23:29:52.539016 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:29:52.539876 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:29:52.544977 systemd-logind[1562]: Removed session 16. Apr 17 23:29:52.550368 systemd[1]: Started sshd@16-91.99.151.60:22-50.85.169.122:52634.service - OpenSSH per-connection server daemon (50.85.169.122:52634). Apr 17 23:29:52.681172 sshd[6098]: Accepted publickey for core from 50.85.169.122 port 52634 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:52.682996 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:52.689602 systemd-logind[1562]: New session 17 of user core. Apr 17 23:29:52.695914 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:29:53.581644 sshd[6098]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:53.592756 systemd[1]: sshd@16-91.99.151.60:22-50.85.169.122:52634.service: Deactivated successfully. Apr 17 23:29:53.608518 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit. Apr 17 23:29:53.608820 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:29:53.623631 systemd[1]: Started sshd@17-91.99.151.60:22-50.85.169.122:52644.service - OpenSSH per-connection server daemon (50.85.169.122:52644). Apr 17 23:29:53.627410 systemd-logind[1562]: Removed session 17. Apr 17 23:29:53.773967 sshd[6124]: Accepted publickey for core from 50.85.169.122 port 52644 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:53.776271 sshd[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:53.782869 systemd-logind[1562]: New session 18 of user core. Apr 17 23:29:53.787498 systemd[1]: Started session-18.scope - Session 18 of User core. 
Apr 17 23:29:54.124306 sshd[6124]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:54.133019 systemd[1]: sshd@17-91.99.151.60:22-50.85.169.122:52644.service: Deactivated successfully. Apr 17 23:29:54.142165 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:29:54.144103 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:29:54.158698 systemd[1]: Started sshd@18-91.99.151.60:22-50.85.169.122:52650.service - OpenSSH per-connection server daemon (50.85.169.122:52650). Apr 17 23:29:54.159601 systemd-logind[1562]: Removed session 18. Apr 17 23:29:54.282669 sshd[6139]: Accepted publickey for core from 50.85.169.122 port 52650 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:54.285091 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:54.292926 systemd-logind[1562]: New session 19 of user core. Apr 17 23:29:54.300183 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:29:54.510514 sshd[6139]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:54.517742 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:29:54.518781 systemd[1]: sshd@18-91.99.151.60:22-50.85.169.122:52650.service: Deactivated successfully. Apr 17 23:29:54.523905 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:29:54.525124 systemd-logind[1562]: Removed session 19. Apr 17 23:29:59.533016 systemd[1]: Started sshd@19-91.99.151.60:22-50.85.169.122:35726.service - OpenSSH per-connection server daemon (50.85.169.122:35726). Apr 17 23:29:59.658206 sshd[6195]: Accepted publickey for core from 50.85.169.122 port 35726 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:29:59.661457 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:29:59.667621 systemd-logind[1562]: New session 20 of user core. 
Apr 17 23:29:59.674453 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:29:59.854775 sshd[6195]: pam_unix(sshd:session): session closed for user core Apr 17 23:29:59.862906 systemd[1]: sshd@19-91.99.151.60:22-50.85.169.122:35726.service: Deactivated successfully. Apr 17 23:29:59.867686 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:29:59.869574 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:29:59.871471 systemd-logind[1562]: Removed session 20. Apr 17 23:30:04.880358 systemd[1]: Started sshd@20-91.99.151.60:22-50.85.169.122:35738.service - OpenSSH per-connection server daemon (50.85.169.122:35738). Apr 17 23:30:05.019858 sshd[6216]: Accepted publickey for core from 50.85.169.122 port 35738 ssh2: RSA SHA256:VfypDX1RTsDok1DcKRgqFkknflSVDpDNB07R6ghJc68 Apr 17 23:30:05.021472 sshd[6216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:30:05.028911 systemd-logind[1562]: New session 21 of user core. Apr 17 23:30:05.034558 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 17 23:30:05.220393 sshd[6216]: pam_unix(sshd:session): session closed for user core Apr 17 23:30:05.225815 systemd[1]: sshd@20-91.99.151.60:22-50.85.169.122:35738.service: Deactivated successfully. Apr 17 23:30:05.236391 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:30:05.239973 systemd-logind[1562]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:30:05.242092 systemd-logind[1562]: Removed session 21. 
Apr 17 23:30:20.299740 containerd[1590]: time="2026-04-17T23:30:20.299672896Z" level=info msg="shim disconnected" id=a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4 namespace=k8s.io Apr 17 23:30:20.299740 containerd[1590]: time="2026-04-17T23:30:20.299733979Z" level=warning msg="cleaning up after shim disconnected" id=a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4 namespace=k8s.io Apr 17 23:30:20.299740 containerd[1590]: time="2026-04-17T23:30:20.299743859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:30:20.301441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4-rootfs.mount: Deactivated successfully. Apr 17 23:30:20.316099 containerd[1590]: time="2026-04-17T23:30:20.314726707Z" level=info msg="shim disconnected" id=577500517293429c96188ba4ee5fed3c9a7682f1ef9156dc85ec7fed95567207 namespace=k8s.io Apr 17 23:30:20.316099 containerd[1590]: time="2026-04-17T23:30:20.314794710Z" level=warning msg="cleaning up after shim disconnected" id=577500517293429c96188ba4ee5fed3c9a7682f1ef9156dc85ec7fed95567207 namespace=k8s.io Apr 17 23:30:20.316099 containerd[1590]: time="2026-04-17T23:30:20.314805351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:30:20.316554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-577500517293429c96188ba4ee5fed3c9a7682f1ef9156dc85ec7fed95567207-rootfs.mount: Deactivated successfully. 
Apr 17 23:30:20.381809 kubelet[2685]: I0417 23:30:20.381760 2685 scope.go:117] "RemoveContainer" containerID="577500517293429c96188ba4ee5fed3c9a7682f1ef9156dc85ec7fed95567207" Apr 17 23:30:20.382550 kubelet[2685]: I0417 23:30:20.381894 2685 scope.go:117] "RemoveContainer" containerID="a7491295c0592c2fd7283d352ab506362111f053692353d569beb5e7b25ac4f4" Apr 17 23:30:20.385819 containerd[1590]: time="2026-04-17T23:30:20.385471275Z" level=info msg="CreateContainer within sandbox \"8d54ee83239d7fd7940f05ad55fda39ed491711a19c48053351a45b1fe3a0e09\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 17 23:30:20.386559 containerd[1590]: time="2026-04-17T23:30:20.386185148Z" level=info msg="CreateContainer within sandbox \"6fa1e17eca7ebb5f902da8bc8698fe99c971a828e33a2c98cadd563aa61a0abc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 17 23:30:20.410363 containerd[1590]: time="2026-04-17T23:30:20.410278494Z" level=info msg="CreateContainer within sandbox \"8d54ee83239d7fd7940f05ad55fda39ed491711a19c48053351a45b1fe3a0e09\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"56a5d37e087e6d7616ccc2262e4b86342c61dfd5f81a43bad446614a70732a4e\"" Apr 17 23:30:20.411753 containerd[1590]: time="2026-04-17T23:30:20.411710560Z" level=info msg="StartContainer for \"56a5d37e087e6d7616ccc2262e4b86342c61dfd5f81a43bad446614a70732a4e\"" Apr 17 23:30:20.416270 containerd[1590]: time="2026-04-17T23:30:20.416107202Z" level=info msg="CreateContainer within sandbox \"6fa1e17eca7ebb5f902da8bc8698fe99c971a828e33a2c98cadd563aa61a0abc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a5687a6a16bfffb97cdcab990b224285a0a9027e8c412ade7d96b937206376b7\"" Apr 17 23:30:20.417782 containerd[1590]: time="2026-04-17T23:30:20.416959121Z" level=info msg="StartContainer for \"a5687a6a16bfffb97cdcab990b224285a0a9027e8c412ade7d96b937206376b7\"" Apr 17 23:30:20.446084 kubelet[2685]: E0417 
23:30:20.443362 2685 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:32806->10.0.0.2:2379: read: connection timed out" Apr 17 23:30:20.509714 containerd[1590]: time="2026-04-17T23:30:20.509253559Z" level=info msg="StartContainer for \"a5687a6a16bfffb97cdcab990b224285a0a9027e8c412ade7d96b937206376b7\" returns successfully" Apr 17 23:30:20.512370 containerd[1590]: time="2026-04-17T23:30:20.512237376Z" level=info msg="StartContainer for \"56a5d37e087e6d7616ccc2262e4b86342c61dfd5f81a43bad446614a70732a4e\" returns successfully" Apr 17 23:30:21.915037 kubelet[2685]: I0417 23:30:21.914196 2685 status_manager.go:895] "Failed to get status for pod" podUID="f345927f-53e5-4506-a004-6ebce958c278" pod="calico-system/calico-node-5fn97" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60964->10.0.0.2:2379: read: connection timed out" Apr 17 23:30:24.546111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56a5d37e087e6d7616ccc2262e4b86342c61dfd5f81a43bad446614a70732a4e-rootfs.mount: Deactivated successfully. 
Apr 17 23:30:24.552661 containerd[1590]: time="2026-04-17T23:30:24.552588312Z" level=info msg="shim disconnected" id=56a5d37e087e6d7616ccc2262e4b86342c61dfd5f81a43bad446614a70732a4e namespace=k8s.io Apr 17 23:30:24.553523 containerd[1590]: time="2026-04-17T23:30:24.553282628Z" level=warning msg="cleaning up after shim disconnected" id=56a5d37e087e6d7616ccc2262e4b86342c61dfd5f81a43bad446614a70732a4e namespace=k8s.io Apr 17 23:30:24.553523 containerd[1590]: time="2026-04-17T23:30:24.553313189Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:30:24.566574 containerd[1590]: time="2026-04-17T23:30:24.566522071Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:30:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:30:24.859757 kubelet[2685]: E0417 23:30:24.857511 2685 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60844->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-9c3210a1b0.18a748cae9f39517 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-9c3210a1b0,UID:da1167cf3ad0bd910da87ef9ea134954,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-9c3210a1b0,},FirstTimestamp:2026-04-17 23:30:14.366598423 +0000 UTC m=+107.791353759,LastTimestamp:2026-04-17 23:30:14.366598423 +0000 UTC m=+107.791353759,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-9c3210a1b0,}"