Mar 14 00:11:34.887671 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 14 00:11:34.887696 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:11:34.887707 kernel: KASLR enabled
Mar 14 00:11:34.887713 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 14 00:11:34.887719 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Mar 14 00:11:34.887724 kernel: random: crng init done
Mar 14 00:11:34.887731 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:11:34.887737 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 14 00:11:34.887744 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 14 00:11:34.887751 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887758 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887764 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887770 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887776 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887783 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887791 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887798 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887805 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:11:34.887811 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 14 00:11:34.887817 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 14 00:11:34.887824 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:11:34.887830 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:11:34.887837 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Mar 14 00:11:34.887843 kernel: Zone ranges:
Mar 14 00:11:34.887849 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:11:34.887857 kernel: DMA32 empty
Mar 14 00:11:34.887864 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 14 00:11:34.887870 kernel: Movable zone start for each node
Mar 14 00:11:34.887876 kernel: Early memory node ranges
Mar 14 00:11:34.887883 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 14 00:11:34.887889 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 14 00:11:34.887895 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 14 00:11:34.887902 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 14 00:11:34.887908 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 14 00:11:34.887915 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 14 00:11:34.887921 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 14 00:11:34.887927 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:11:34.887935 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 14 00:11:34.887942 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:11:34.887948 kernel: psci: PSCIv1.1 detected in firmware.
Mar 14 00:11:34.887958 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:11:34.887965 kernel: psci: Trusted OS migration not required
Mar 14 00:11:34.887972 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:11:34.887980 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 14 00:11:34.887987 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:11:34.887994 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:11:34.888000 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:11:34.888007 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:11:34.888014 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:11:34.888021 kernel: CPU features: detected: Hardware dirty bit management
Mar 14 00:11:34.888027 kernel: CPU features: detected: Spectre-v4
Mar 14 00:11:34.888034 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:11:34.888041 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 14 00:11:34.888050 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 14 00:11:34.888056 kernel: CPU features: detected: ARM erratum 1418040
Mar 14 00:11:34.888063 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 14 00:11:34.888070 kernel: alternatives: applying boot alternatives
Mar 14 00:11:34.888078 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:11:34.888085 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:11:34.888092 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:11:34.888099 kernel: Fallback order for Node 0: 0
Mar 14 00:11:34.888106 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 14 00:11:34.888112 kernel: Policy zone: Normal
Mar 14 00:11:34.888119 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:11:34.888127 kernel: software IO TLB: area num 2.
Mar 14 00:11:34.888134 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 14 00:11:34.888142 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Mar 14 00:11:34.888149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:11:34.888156 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:11:34.888167 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:11:34.888174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:11:34.888181 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:11:34.888188 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:11:34.888195 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:11:34.888202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:11:34.888208 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:11:34.888217 kernel: GICv3: 256 SPIs implemented
Mar 14 00:11:34.888223 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:11:34.888230 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:11:34.888237 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 14 00:11:34.888244 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 14 00:11:34.888251 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 14 00:11:34.888258 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:11:34.888265 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:11:34.889821 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 14 00:11:34.890093 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 14 00:11:34.890102 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:11:34.890115 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:11:34.890123 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 14 00:11:34.890130 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 14 00:11:34.890137 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 14 00:11:34.890144 kernel: Console: colour dummy device 80x25
Mar 14 00:11:34.890151 kernel: ACPI: Core revision 20230628
Mar 14 00:11:34.890159 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 14 00:11:34.890166 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:11:34.890173 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:11:34.890180 kernel: landlock: Up and running.
Mar 14 00:11:34.890189 kernel: SELinux: Initializing.
Mar 14 00:11:34.890197 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:11:34.890204 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:11:34.890211 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:11:34.890219 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:11:34.890226 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:11:34.890233 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:11:34.890241 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 14 00:11:34.890248 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 14 00:11:34.890257 kernel: Remapping and enabling EFI services.
Mar 14 00:11:34.890264 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:11:34.890271 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:11:34.891967 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 14 00:11:34.891977 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 14 00:11:34.891984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:11:34.891992 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 14 00:11:34.891999 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:11:34.892007 kernel: SMP: Total of 2 processors activated.
Mar 14 00:11:34.892021 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:11:34.892028 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 14 00:11:34.892036 kernel: CPU features: detected: Common not Private translations
Mar 14 00:11:34.892049 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:11:34.892058 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 14 00:11:34.892065 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 14 00:11:34.892073 kernel: CPU features: detected: LSE atomic instructions
Mar 14 00:11:34.892080 kernel: CPU features: detected: Privileged Access Never
Mar 14 00:11:34.892088 kernel: CPU features: detected: RAS Extension Support
Mar 14 00:11:34.892098 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 14 00:11:34.892108 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:11:34.892118 kernel: alternatives: applying system-wide alternatives
Mar 14 00:11:34.892126 kernel: devtmpfs: initialized
Mar 14 00:11:34.892134 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:11:34.892141 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:11:34.892149 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:11:34.892156 kernel: SMBIOS 3.0.0 present.
Mar 14 00:11:34.892166 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 14 00:11:34.892173 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:11:34.892181 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:11:34.892189 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:11:34.892196 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:11:34.892204 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:11:34.892212 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Mar 14 00:11:34.892219 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:11:34.892227 kernel: cpuidle: using governor menu
Mar 14 00:11:34.892236 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:11:34.892243 kernel: ASID allocator initialised with 32768 entries
Mar 14 00:11:34.892251 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:11:34.892259 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:11:34.892267 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 14 00:11:34.892284 kernel: Modules: 0 pages in range for non-PLT usage
Mar 14 00:11:34.892292 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:11:34.892300 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:11:34.892307 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:11:34.892317 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:11:34.892324 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:11:34.892332 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:11:34.892339 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:11:34.892346 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:11:34.892354 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:11:34.892361 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:11:34.892369 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:11:34.892377 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:11:34.892386 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:11:34.892394 kernel: ACPI: Interpreter enabled
Mar 14 00:11:34.892401 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:11:34.892409 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:11:34.892417 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 14 00:11:34.892424 kernel: printk: console [ttyAMA0] enabled
Mar 14 00:11:34.892431 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:11:34.892652 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:11:34.892739 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:11:34.892808 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:11:34.892874 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 14 00:11:34.892940 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 14 00:11:34.892950 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 14 00:11:34.892958 kernel: PCI host bridge to bus 0000:00
Mar 14 00:11:34.893031 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 14 00:11:34.893095 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:11:34.893155 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 14 00:11:34.893215 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:11:34.893665 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 14 00:11:34.893770 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 14 00:11:34.893842 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 14 00:11:34.893911 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:11:34.893995 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.894065 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 14 00:11:34.894145 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.894215 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 14 00:11:34.894308 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.894381 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 14 00:11:34.894467 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.894570 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 14 00:11:34.894659 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.894730 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 14 00:11:34.894806 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.894875 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 14 00:11:34.894956 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.895025 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 14 00:11:34.895100 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.895168 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 14 00:11:34.895242 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:11:34.895534 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 14 00:11:34.895706 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 14 00:11:34.895781 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 14 00:11:34.895863 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:11:34.895935 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 14 00:11:34.896006 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:11:34.896076 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:11:34.896161 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:11:34.896236 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 14 00:11:34.896366 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 14 00:11:34.896443 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 14 00:11:34.896513 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 14 00:11:34.896608 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 14 00:11:34.896682 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 14 00:11:34.896805 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:11:34.896908 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 14 00:11:34.896991 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 14 00:11:34.897072 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 14 00:11:34.897146 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 14 00:11:34.897217 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:11:34.899395 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:11:34.899496 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 14 00:11:34.899589 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 14 00:11:34.899663 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:11:34.899739 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 14 00:11:34.899809 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:11:34.899877 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:11:34.899960 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 14 00:11:34.900029 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 14 00:11:34.900097 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 14 00:11:34.900169 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 14 00:11:34.900238 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:11:34.902401 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:11:34.902494 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 14 00:11:34.902587 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 14 00:11:34.902665 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 14 00:11:34.902738 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 14 00:11:34.902808 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:11:34.902876 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:11:34.902949 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 14 00:11:34.903016 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:11:34.903083 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:11:34.903157 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:11:34.903225 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:11:34.903328 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:11:34.903404 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:11:34.903471 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:11:34.903567 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:11:34.903651 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:11:34.903722 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:11:34.903795 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:11:34.903869 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 14 00:11:34.903937 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:11:34.904008 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 14 00:11:34.904075 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:11:34.904144 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 14 00:11:34.904215 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:11:34.905041 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 14 00:11:34.905150 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:11:34.905224 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 14 00:11:34.907401 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:11:34.907498 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 14 00:11:34.907591 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:11:34.907676 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 14 00:11:34.907745 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:11:34.907816 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 14 00:11:34.907883 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:11:34.907954 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 14 00:11:34.908032 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:11:34.908107 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 14 00:11:34.908181 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 14 00:11:34.908252 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 14 00:11:34.908336 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:11:34.908412 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 14 00:11:34.908481 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:11:34.908591 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 14 00:11:34.908669 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:11:34.908742 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 14 00:11:34.908815 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 14 00:11:34.908885 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 14 00:11:34.908953 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 14 00:11:34.909022 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 14 00:11:34.909089 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 14 00:11:34.909158 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 14 00:11:34.909226 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 14 00:11:34.909388 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 14 00:11:34.909469 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 14 00:11:34.909550 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 14 00:11:34.909626 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 14 00:11:34.909699 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 14 00:11:34.909775 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 14 00:11:34.909845 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:11:34.909914 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 14 00:11:34.909983 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:11:34.910054 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 14 00:11:34.910120 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 14 00:11:34.910186 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:11:34.910262 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 14 00:11:34.910367 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:11:34.910437 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 14 00:11:34.910504 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 14 00:11:34.910608 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:11:34.910690 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:11:34.910762 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 14 00:11:34.910831 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:11:34.910898 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 14 00:11:34.910971 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 14 00:11:34.911038 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:11:34.911112 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:11:34.911181 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:11:34.911249 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 14 00:11:34.911493 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 14 00:11:34.911595 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:11:34.911675 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 14 00:11:34.911752 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 14 00:11:34.911820 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:11:34.911887 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 14 00:11:34.911954 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 14 00:11:34.912021 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:11:34.912096 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 14 00:11:34.912165 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 14 00:11:34.912235 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:11:34.912322 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 14 00:11:34.912390 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 14 00:11:34.912457 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:11:34.912533 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 14 00:11:34.912651 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 14 00:11:34.912725 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 14 00:11:34.912795 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:11:34.912864 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 14 00:11:34.912937 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 14 00:11:34.913005 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:11:34.913075 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:11:34.913143 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 14 00:11:34.913211 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 14 00:11:34.913373 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:11:34.913473 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:11:34.913558 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 14 00:11:34.913647 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 14 00:11:34.913714 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:11:34.913810 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 14 00:11:34.913885 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:11:34.913946 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 14 00:11:34.914018 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 14 00:11:34.914081 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 14 00:11:34.914146 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:11:34.914215 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 14 00:11:34.914287 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 14 00:11:34.914351 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:11:34.914423 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 14 00:11:34.914485 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 14 00:11:34.914567 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:11:34.914643 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 14 00:11:34.914707 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 14 00:11:34.914788 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:11:34.914864 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 14 00:11:34.914928 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 14 00:11:34.914990 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:11:34.915062 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 14 00:11:34.915125 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 14 00:11:34.915191 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:11:34.915261 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 14 00:11:34.915418 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 14 00:11:34.915487 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:11:34.915603 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 14 00:11:34.915676 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 14 00:11:34.915739 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:11:34.915809 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 14 00:11:34.915878 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 14 00:11:34.915943 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:11:34.915953 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:11:34.915961 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:11:34.915970 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:11:34.915978 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:11:34.915985 kernel: iommu: Default domain type: Translated
Mar 14 00:11:34.915993 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:11:34.916001 kernel: efivars: Registered efivars operations
Mar 14 00:11:34.916009 kernel: vgaarb: loaded
Mar 14 00:11:34.916019 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:11:34.916027 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:11:34.916035 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:11:34.916043 kernel: pnp: PnP ACPI init
Mar 14 00:11:34.916117 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 14 00:11:34.916129 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:11:34.916137 kernel: NET: Registered PF_INET protocol family
Mar 14 00:11:34.916145 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:11:34.916155 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:11:34.916164 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:11:34.916172 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:11:34.916180 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:11:34.916188 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:11:34.916196 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:11:34.916204 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:11:34.916212 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:11:34.916303 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 14 00:11:34.916318 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:11:34.916326 kernel: kvm [1]: HYP mode not available
Mar 14 00:11:34.916334 kernel: Initialise system trusted keyrings
Mar 14 00:11:34.916342 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:11:34.916350 kernel: Key type asymmetric registered
Mar 14 00:11:34.916358 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:11:34.916366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 14 00:11:34.916374 kernel: io scheduler mq-deadline registered
Mar 14 00:11:34.916382 kernel: io scheduler kyber registered
Mar 14 00:11:34.916392 kernel: io scheduler bfq registered
Mar 14 00:11:34.916400 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 14 00:11:34.916475 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 14 00:11:34.916554 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 14 00:11:34.916626 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.916701 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 14 00:11:34.916769 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 14 00:11:34.916841 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.916912 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Mar 14 00:11:34.916980 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Mar 14 00:11:34.917048 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.917117 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Mar 14 00:11:34.917186 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Mar 14 00:11:34.917259 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.917354 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Mar 14 00:11:34.917435 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Mar 14 00:11:34.917504 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.917592 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Mar 14 00:11:34.917665 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Mar 14 00:11:34.917738 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.917808 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Mar 14 00:11:34.917877 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Mar 14 00:11:34.917945 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.918015 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Mar 14 00:11:34.918083 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Mar 14 00:11:34.918154 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.918165 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Mar 14 00:11:34.918234 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Mar 14 00:11:34.918386 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Mar 14 00:11:34.918461 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:11:34.918472 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:11:34.918487 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:11:34.918496 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:11:34.918613 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Mar 14 00:11:34.918694 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Mar 14 00:11:34.918706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:11:34.918714 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:11:34.918782 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Mar 14 00:11:34.918794 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Mar 14 00:11:34.918802 kernel: thunder_xcv, ver 1.0
Mar 14 00:11:34.918813 kernel: thunder_bgx, ver 1.0
Mar 14 00:11:34.918822 kernel: nicpf, ver 1.0
Mar 14 00:11:34.918829 kernel: nicvf, ver 1.0
Mar 14 00:11:34.918938 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:11:34.919028 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:11:34 UTC (1773447094)
Mar 14 00:11:34.919040 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:11:34.919049 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 14 00:11:34.919057 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:11:34.919068 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:11:34.919076 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:11:34.919084 kernel: Segment Routing with IPv6
Mar 14 00:11:34.919092 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:11:34.919100 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:11:34.919107 kernel: Key type dns_resolver registered
Mar 14 00:11:34.919115 kernel: registered taskstats version 1
Mar 14 00:11:34.919123 kernel: Loading compiled-in X.509 certificates
Mar 14 00:11:34.919131 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:11:34.919141 kernel: Key type .fscrypt registered
Mar 14 00:11:34.919149 kernel: Key type fscrypt-provisioning registered
Mar 14 00:11:34.919157 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:11:34.919165 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:11:34.919173 kernel: ima: No architecture policies found
Mar 14 00:11:34.919181 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:11:34.919189 kernel: clk: Disabling unused clocks
Mar 14 00:11:34.919196 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:11:34.919204 kernel: Run /init as init process
Mar 14 00:11:34.919214 kernel: with arguments:
Mar 14 00:11:34.919222 kernel: /init
Mar 14 00:11:34.919230 kernel: with environment:
Mar 14 00:11:34.919238 kernel: HOME=/
Mar 14 00:11:34.919246 kernel: TERM=linux
Mar 14 00:11:34.919256 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:11:34.919266 systemd[1]: Detected virtualization kvm.
Mar 14 00:11:34.919349 systemd[1]: Detected architecture arm64.
Mar 14 00:11:34.919362 systemd[1]: Running in initrd.
Mar 14 00:11:34.919372 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:11:34.919380 systemd[1]: Hostname set to .
Mar 14 00:11:34.919389 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:11:34.919397 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:11:34.919406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:11:34.919415 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:11:34.919424 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:11:34.919434 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:11:34.919443 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:11:34.919451 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:11:34.919461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:11:34.919470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:11:34.919479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:11:34.919487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:11:34.919497 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:11:34.919507 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:11:34.919516 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:11:34.919524 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:11:34.919532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:11:34.919553 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:11:34.919562 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:11:34.919571 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:11:34.919582 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:11:34.919590 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:11:34.919599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:11:34.919607 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:11:34.919616 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:11:34.919624 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:11:34.919633 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:11:34.919641 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:11:34.919650 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:11:34.919660 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:11:34.919668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:11:34.919676 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:11:34.919685 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:11:34.919721 systemd-journald[236]: Collecting audit messages is disabled.
Mar 14 00:11:34.919745 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:11:34.919754 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:11:34.919763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:11:34.919773 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:11:34.919782 kernel: Bridge firewalling registered
Mar 14 00:11:34.919790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:11:34.919799 systemd-journald[236]: Journal started
Mar 14 00:11:34.919819 systemd-journald[236]: Runtime Journal (/run/log/journal/711ee1ab781c483e83d5cbf1388fb164) is 8.0M, max 76.6M, 68.6M free.
Mar 14 00:11:34.895651 systemd-modules-load[237]: Inserted module 'overlay'
Mar 14 00:11:34.920355 systemd-modules-load[237]: Inserted module 'br_netfilter'
Mar 14 00:11:34.923712 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:11:34.934854 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:11:34.936779 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:11:34.945533 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:11:34.948458 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:11:34.952736 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:11:34.954767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:11:34.968600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:11:34.972507 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:11:34.980493 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:11:34.981210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:11:34.986439 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:11:34.997639 dracut-cmdline[275]: dracut-dracut-053
Mar 14 00:11:35.002619 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:11:35.027420 systemd-resolved[277]: Positive Trust Anchors:
Mar 14 00:11:35.027434 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:11:35.027468 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:11:35.037829 systemd-resolved[277]: Defaulting to hostname 'linux'.
Mar 14 00:11:35.040610 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:11:35.041306 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:11:35.096315 kernel: SCSI subsystem initialized
Mar 14 00:11:35.101318 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:11:35.109357 kernel: iscsi: registered transport (tcp)
Mar 14 00:11:35.122574 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:11:35.122701 kernel: QLogic iSCSI HBA Driver
Mar 14 00:11:35.174687 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:11:35.180471 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:11:35.200832 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:11:35.200952 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:11:35.200978 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:11:35.250362 kernel: raid6: neonx8 gen() 15568 MB/s
Mar 14 00:11:35.267344 kernel: raid6: neonx4 gen() 15515 MB/s
Mar 14 00:11:35.284329 kernel: raid6: neonx2 gen() 13082 MB/s
Mar 14 00:11:35.301340 kernel: raid6: neonx1 gen() 10374 MB/s
Mar 14 00:11:35.318345 kernel: raid6: int64x8 gen() 6900 MB/s
Mar 14 00:11:35.335348 kernel: raid6: int64x4 gen() 7305 MB/s
Mar 14 00:11:35.352342 kernel: raid6: int64x2 gen() 6077 MB/s
Mar 14 00:11:35.369349 kernel: raid6: int64x1 gen() 5008 MB/s
Mar 14 00:11:35.369426 kernel: raid6: using algorithm neonx8 gen() 15568 MB/s
Mar 14 00:11:35.386334 kernel: raid6: .... xor() 11889 MB/s, rmw enabled
Mar 14 00:11:35.386406 kernel: raid6: using neon recovery algorithm
Mar 14 00:11:35.391317 kernel: xor: measuring software checksum speed
Mar 14 00:11:35.391373 kernel: 8regs : 19793 MB/sec
Mar 14 00:11:35.391395 kernel: 32regs : 17213 MB/sec
Mar 14 00:11:35.392388 kernel: arm64_neon : 26927 MB/sec
Mar 14 00:11:35.392441 kernel: xor: using function: arm64_neon (26927 MB/sec)
Mar 14 00:11:35.443328 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:11:35.458302 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:11:35.465529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:11:35.479227 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Mar 14 00:11:35.483444 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:11:35.490662 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:11:35.510928 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Mar 14 00:11:35.545827 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:11:35.555608 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:11:35.608334 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:11:35.617598 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:11:35.638319 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:11:35.639222 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:11:35.642080 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:11:35.643163 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:11:35.652538 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:11:35.672955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:11:35.740995 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:11:35.738760 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:11:35.747583 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:11:35.747635 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 14 00:11:35.738894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:11:35.743601 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:11:35.744263 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:11:35.744444 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:11:35.745122 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:11:35.754851 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:11:35.765643 kernel: ACPI: bus type USB registered
Mar 14 00:11:35.765705 kernel: usbcore: registered new interface driver usbfs
Mar 14 00:11:35.765719 kernel: usbcore: registered new interface driver hub
Mar 14 00:11:35.767343 kernel: usbcore: registered new device driver usb
Mar 14 00:11:35.785724 kernel: sr 0:0:0:0: Power-on or device reset occurred
Mar 14 00:11:35.785940 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Mar 14 00:11:35.786368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:11:35.790295 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:11:35.791481 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:11:35.794441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:11:35.801313 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:11:35.801521 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 14 00:11:35.803330 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 14 00:11:35.805790 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:11:35.805950 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 14 00:11:35.809331 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 14 00:11:35.809492 kernel: hub 1-0:1.0: USB hub found
Mar 14 00:11:35.809628 kernel: hub 1-0:1.0: 4 ports detected
Mar 14 00:11:35.809715 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 14 00:11:35.809810 kernel: hub 2-0:1.0: USB hub found
Mar 14 00:11:35.809957 kernel: hub 2-0:1.0: 4 ports detected
Mar 14 00:11:35.816449 kernel: sd 0:0:0:1: Power-on or device reset occurred
Mar 14 00:11:35.818702 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 14 00:11:35.818880 kernel: sd 0:0:0:1: [sda] Write Protect is off
Mar 14 00:11:35.818967 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Mar 14 00:11:35.820175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:11:35.821049 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:11:35.827194 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:11:35.827236 kernel: GPT:17805311 != 80003071
Mar 14 00:11:35.827246 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:11:35.827256 kernel: GPT:17805311 != 80003071
Mar 14 00:11:35.827265 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:11:35.827287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:11:35.830305 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 14 00:11:35.865428 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (523) Mar 14 00:11:35.870368 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (516) Mar 14 00:11:35.872488 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 14 00:11:35.885765 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Mar 14 00:11:35.896392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 14 00:11:35.903295 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 14 00:11:35.904092 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 14 00:11:35.910511 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 14 00:11:35.916264 disk-uuid[575]: Primary Header is updated. Mar 14 00:11:35.916264 disk-uuid[575]: Secondary Entries is updated. Mar 14 00:11:35.916264 disk-uuid[575]: Secondary Header is updated. Mar 14 00:11:35.923516 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:11:35.927358 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:11:36.053303 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 14 00:11:36.187297 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 14 00:11:36.187352 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 14 00:11:36.188306 kernel: usbcore: registered new interface driver usbhid Mar 14 00:11:36.188334 kernel: usbhid: USB HID core driver Mar 14 00:11:36.297326 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 14 00:11:36.426312 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 14 00:11:36.481209 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 14 00:11:36.936402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 14 00:11:36.936456 disk-uuid[576]: The operation has completed successfully. Mar 14 00:11:36.986222 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 14 00:11:36.986348 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 14 00:11:37.000455 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 14 00:11:37.016100 sh[593]: Success Mar 14 00:11:37.028360 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 14 00:11:37.075842 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 14 00:11:37.083912 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 14 00:11:37.086603 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
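verity-setup.service is preparing the dm-verity device for the read-only /usr partition: every block read from /dev/mapper/usr is hashed (here with the accelerated sha256-ce implementation) and checked against a hash tree whose root must equal the verity.usrhash= value on this boot's kernel command line. A toy illustration of the Merkle-tree idea, not the actual dm-verity on-disk format:

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def merkle_root(blocks):
        # Leaf hashes over data blocks, parents over concatenated children;
        # dm-verity checks every read against a tree of this shape.
        level = [h(b) for b in blocks]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the odd tail
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    blocks = [b"usr-block-%d" % i for i in range(8)]
    trusted_root = merkle_root(blocks)          # analogous to verity.usrhash=...
    assert merkle_root(blocks) == trusted_root  # a modified block would change the root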
Mar 14 00:11:37.099837 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773 Mar 14 00:11:37.099893 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 14 00:11:37.099911 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 14 00:11:37.099929 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 14 00:11:37.099945 kernel: BTRFS info (device dm-0): using free space tree Mar 14 00:11:37.106318 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 14 00:11:37.108026 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 14 00:11:37.111096 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 14 00:11:37.117497 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 14 00:11:37.120939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 14 00:11:37.136772 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70 Mar 14 00:11:37.136839 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 14 00:11:37.136857 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:11:37.143704 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:11:37.143780 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:11:37.157028 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 14 00:11:37.158465 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70 Mar 14 00:11:37.165345 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 14 00:11:37.171481 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 14 00:11:37.265261 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:11:37.272360 ignition[693]: Ignition 2.19.0 Mar 14 00:11:37.272369 ignition[693]: Stage: fetch-offline Mar 14 00:11:37.276471 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:11:37.272404 ignition[693]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:11:37.277219 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:11:37.272411 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:11:37.272579 ignition[693]: parsed url from cmdline: "" Mar 14 00:11:37.272583 ignition[693]: no config URL provided Mar 14 00:11:37.272588 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:11:37.272595 ignition[693]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:11:37.272600 ignition[693]: failed to fetch config: resource requires networking Mar 14 00:11:37.272881 ignition[693]: Ignition finished successfully Mar 14 00:11:37.304690 systemd-networkd[780]: lo: Link UP Mar 14 00:11:37.304701 systemd-networkd[780]: lo: Gained carrier Mar 14 00:11:37.306246 systemd-networkd[780]: Enumeration completed Mar 14 00:11:37.306999 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:37.307002 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
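The fetch-offline lines trace Ignition's config lookup order on this image: a URL from the kernel command line first ("parsed url from cmdline" was empty), then the baked-in /usr/lib/ignition/user.ign, and finally the platform's network source, which is why the stage ends with "resource requires networking" and hands off to the fetch stage. A small sketch of that fallback using the paths named in the log:

    import os

    def fetch_offline(cmdline_url=None):
        # 1. a config URL passed on the kernel command line, 2. the baked-in
        # user.ign, 3. give up and defer to the networked fetch stage.
        if cmdline_url:
            raise NotImplementedError("would fetch the cmdline-provided URL")
        user_ign = "/usr/lib/ignition/user.ign"
        if os.path.exists(user_ign):
            with open(user_ign, "rb") as f:
                return f.read()
        raise RuntimeError("failed to fetch config: resource requires networking")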
Mar 14 00:11:37.307018 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:11:37.308129 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:37.308132 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:11:37.308983 systemd-networkd[780]: eth0: Link UP Mar 14 00:11:37.308986 systemd-networkd[780]: eth0: Gained carrier Mar 14 00:11:37.308993 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:37.310339 systemd[1]: Reached target network.target - Network. Mar 14 00:11:37.312246 systemd-networkd[780]: eth1: Link UP Mar 14 00:11:37.312249 systemd-networkd[780]: eth1: Gained carrier Mar 14 00:11:37.312256 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:37.317499 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 14 00:11:37.330934 ignition[783]: Ignition 2.19.0 Mar 14 00:11:37.330944 ignition[783]: Stage: fetch Mar 14 00:11:37.331140 ignition[783]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:11:37.331149 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:11:37.331238 ignition[783]: parsed url from cmdline: "" Mar 14 00:11:37.331242 ignition[783]: no config URL provided Mar 14 00:11:37.331246 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Mar 14 00:11:37.331253 ignition[783]: no config at "/usr/lib/ignition/user.ign" Mar 14 00:11:37.331291 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 14 00:11:37.331830 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 14 00:11:37.363386 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 14 00:11:37.376389 systemd-networkd[780]: eth0: DHCPv4 address 188.245.55.47/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 14 00:11:37.532007 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 14 00:11:37.538129 ignition[783]: GET result: OK Mar 14 00:11:37.538367 ignition[783]: parsing config with SHA512: 4195b1f177b4b378b94aeee466b30c93683f15a20c807c14be6bcbd71bf0951416ca94bc85e8bbf3473c8b9ed206156e78258794c6bea8dfb2c3e70c197505cb Mar 14 00:11:37.544660 unknown[783]: fetched base config from "system" Mar 14 00:11:37.544669 unknown[783]: fetched base config from "system" Mar 14 00:11:37.545025 ignition[783]: fetch: fetch complete Mar 14 00:11:37.544677 unknown[783]: fetched user config from "hetzner" Mar 14 00:11:37.545029 ignition[783]: fetch: fetch passed Mar 14 00:11:37.545071 ignition[783]: Ignition finished successfully Mar 14 00:11:37.549522 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 14 00:11:37.555445 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 14 00:11:37.567676 ignition[791]: Ignition 2.19.0 Mar 14 00:11:37.567686 ignition[791]: Stage: kargs Mar 14 00:11:37.567856 ignition[791]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:11:37.567866 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:11:37.571690 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
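With DHCP complete (10.0.0.3 on eth1, 188.245.55.47 on eth0), the fetch stage retries the Hetzner userdata endpoint, succeeds on attempt #2, and fingerprints the config with SHA-512, matching the "parsing config with SHA512: …" line. A hedged sketch of that retry-then-hash flow; the URL is the one in the log, while the backoff policy here is illustrative:

    import hashlib
    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(attempts=5, delay=1.0):
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    return resp.read()
            except OSError as err:  # e.g. "network is unreachable" on attempt #1
                print("GET %s: attempt #%d failed: %s" % (URL, attempt, err))
                time.sleep(delay)
        raise RuntimeError("giving up on userdata")

    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())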
Mar 14 00:11:37.568758 ignition[791]: kargs: kargs passed Mar 14 00:11:37.568803 ignition[791]: Ignition finished successfully Mar 14 00:11:37.576588 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 14 00:11:37.589879 ignition[798]: Ignition 2.19.0 Mar 14 00:11:37.589889 ignition[798]: Stage: disks Mar 14 00:11:37.590050 ignition[798]: no configs at "/usr/lib/ignition/base.d" Mar 14 00:11:37.590059 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:11:37.591009 ignition[798]: disks: disks passed Mar 14 00:11:37.591058 ignition[798]: Ignition finished successfully Mar 14 00:11:37.593199 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 14 00:11:37.594707 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 14 00:11:37.595526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:11:37.596823 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:11:37.597968 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:11:37.599034 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:11:37.604501 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 14 00:11:37.622090 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 14 00:11:37.625568 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 14 00:11:37.631445 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 14 00:11:37.693295 kernel: EXT4-fs (sda9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none. Mar 14 00:11:37.694026 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 14 00:11:37.695202 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 14 00:11:37.703453 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:11:37.707118 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 14 00:11:37.712550 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 14 00:11:37.715234 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 14 00:11:37.715294 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:11:37.717182 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 14 00:11:37.722291 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814) Mar 14 00:11:37.724298 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70 Mar 14 00:11:37.724335 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 14 00:11:37.724346 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:11:37.724799 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 14 00:11:37.731372 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:11:37.731422 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:11:37.733324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 14 00:11:37.790180 coreos-metadata[816]: Mar 14 00:11:37.789 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 14 00:11:37.792003 coreos-metadata[816]: Mar 14 00:11:37.791 INFO Fetch successful Mar 14 00:11:37.792003 coreos-metadata[816]: Mar 14 00:11:37.791 INFO wrote hostname ci-4081-3-6-n-8cab04691e to /sysroot/etc/hostname Mar 14 00:11:37.795350 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Mar 14 00:11:37.797566 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 14 00:11:37.802402 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 14 00:11:37.807187 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Mar 14 00:11:37.812982 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Mar 14 00:11:37.914579 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 14 00:11:37.919408 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 14 00:11:37.921298 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 14 00:11:37.933323 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70 Mar 14 00:11:37.951166 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 14 00:11:37.956317 ignition[932]: INFO : Ignition 2.19.0 Mar 14 00:11:37.956317 ignition[932]: INFO : Stage: mount Mar 14 00:11:37.956317 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:11:37.956317 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:11:37.958617 ignition[932]: INFO : mount: mount passed Mar 14 00:11:37.958617 ignition[932]: INFO : Ignition finished successfully Mar 14 00:11:37.959823 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 14 00:11:37.965412 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 14 00:11:38.099934 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 14 00:11:38.109913 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 14 00:11:38.117480 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Mar 14 00:11:38.117568 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70 Mar 14 00:11:38.119375 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 14 00:11:38.119436 kernel: BTRFS info (device sda6): using free space tree Mar 14 00:11:38.124092 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 14 00:11:38.124149 kernel: BTRFS info (device sda6): auto enabling async discard Mar 14 00:11:38.127613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
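flatcar-metadata-hostname.service does just what its two INFO lines say: fetch the hostname from the metadata service and write it into the target root before switch-root. A minimal sketch with the URL and destination path taken from the log:

    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    # Fetch the instance hostname from the metadata service...
    with urllib.request.urlopen(URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    # ...and persist it into the new root that the initrd is assembling.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")

    print("wrote hostname %s to /sysroot/etc/hostname" % hostname)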
Mar 14 00:11:38.160080 ignition[959]: INFO : Ignition 2.19.0 Mar 14 00:11:38.161143 ignition[959]: INFO : Stage: files Mar 14 00:11:38.161143 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:11:38.161143 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:11:38.164262 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Mar 14 00:11:38.164262 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 14 00:11:38.164262 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 14 00:11:38.168576 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 14 00:11:38.169890 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 14 00:11:38.171177 unknown[959]: wrote ssh authorized keys file for user: core Mar 14 00:11:38.172450 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 14 00:11:38.174325 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 14 00:11:38.176049 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 14 00:11:38.273703 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 14 00:11:38.352565 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 14 00:11:38.353757 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 14 00:11:38.366669 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 14 00:11:38.366669 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 14 00:11:38.366669 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-arm64.raw: attempt #1 Mar 14 00:11:38.505545 systemd-networkd[780]: eth1: Gained IPv6LL Mar 14 00:11:38.642593 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 14 00:11:38.904746 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-arm64.raw" Mar 14 00:11:38.904746 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Mar 14 00:11:38.907359 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:11:38.907359 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 14 00:11:38.907359 ignition[959]: INFO : files: files passed Mar 14 00:11:38.907359 ignition[959]: INFO : Ignition finished successfully Mar 14 00:11:38.910641 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 14 00:11:38.919575 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 14 00:11:38.924673 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 14 00:11:38.928230 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 14 00:11:38.928385 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 14 00:11:38.944817 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:11:38.944817 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:11:38.947645 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 14 00:11:38.951396 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:11:38.952309 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 14 00:11:38.961957 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 14 00:11:38.990086 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 14 00:11:38.990228 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 14 00:11:38.991817 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 14 00:11:38.993039 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 14 00:11:38.994104 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 14 00:11:38.998604 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 14 00:11:39.016433 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:11:39.018579 systemd-networkd[780]: eth0: Gained IPv6LL Mar 14 00:11:39.031646 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 14 00:11:39.047174 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:11:39.048068 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:11:39.049337 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:11:39.050470 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:11:39.050637 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:11:39.052090 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:11:39.052807 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:11:39.053901 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:11:39.054952 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:11:39.056011 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:11:39.057217 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:11:39.058365 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:11:39.059665 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:11:39.060763 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:11:39.061868 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:11:39.062775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:11:39.062898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:11:39.064301 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:11:39.064952 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:11:39.066023 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Mar 14 00:11:39.069347 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:11:39.070168 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:11:39.070314 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 14 00:11:39.072121 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:11:39.072249 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:11:39.074111 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:11:39.074233 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:11:39.075435 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 14 00:11:39.075551 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 14 00:11:39.092606 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 14 00:11:39.098040 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:11:39.098620 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:11:39.098750 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:11:39.102700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:11:39.102807 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:11:39.107804 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:11:39.107890 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 14 00:11:39.115198 ignition[1011]: INFO : Ignition 2.19.0 Mar 14 00:11:39.115198 ignition[1011]: INFO : Stage: umount Mar 14 00:11:39.116237 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:11:39.116237 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 14 00:11:39.119382 ignition[1011]: INFO : umount: umount passed Mar 14 00:11:39.119382 ignition[1011]: INFO : Ignition finished successfully Mar 14 00:11:39.121077 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:11:39.124042 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:11:39.126376 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:11:39.127682 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:11:39.127793 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:11:39.128761 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:11:39.128811 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:11:39.129838 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:11:39.129877 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:11:39.130939 systemd[1]: Stopped target network.target - Network. Mar 14 00:11:39.131887 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:11:39.131942 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:11:39.132980 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:11:39.133829 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:11:39.137424 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:11:39.139467 systemd[1]: Stopped target slices.target - Slice Units. 
Mar 14 00:11:39.140941 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:11:39.141998 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:11:39.142071 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:11:39.143432 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:11:39.143509 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:11:39.144915 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:11:39.144988 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:11:39.146133 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:11:39.146178 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:11:39.147412 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:11:39.148840 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:11:39.150216 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:11:39.150329 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:11:39.152241 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:11:39.152331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:11:39.154351 systemd-networkd[780]: eth0: DHCPv6 lease lost Mar 14 00:11:39.159520 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:11:39.159670 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:11:39.160736 systemd-networkd[780]: eth1: DHCPv6 lease lost Mar 14 00:11:39.164755 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:11:39.165853 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:11:39.168346 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:11:39.168429 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:11:39.176509 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:11:39.177116 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:11:39.177216 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:11:39.179748 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:11:39.179795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:11:39.181164 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:11:39.181216 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:11:39.181979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:11:39.182020 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:11:39.183125 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:11:39.200938 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:11:39.201113 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:11:39.203560 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:11:39.203656 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:11:39.205780 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:11:39.205858 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Mar 14 00:11:39.207566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:11:39.207602 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:11:39.208616 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:11:39.208658 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:11:39.210142 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:11:39.210184 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:11:39.211786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:11:39.211836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:11:39.220827 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:11:39.221704 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:11:39.221774 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:11:39.222767 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 14 00:11:39.222816 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:11:39.224207 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:11:39.224257 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:11:39.227113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:11:39.227175 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:11:39.231803 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:11:39.231928 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:11:39.233598 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:11:39.237536 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:11:39.250263 systemd[1]: Switching root. Mar 14 00:11:39.290017 systemd-journald[236]: Journal stopped Mar 14 00:11:40.237100 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Mar 14 00:11:40.237191 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:11:40.237204 kernel: SELinux: policy capability open_perms=1 Mar 14 00:11:40.237214 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:11:40.237228 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:11:40.237238 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:11:40.237248 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:11:40.237257 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:11:40.237271 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:11:40.237314 kernel: audit: type=1403 audit(1773447099.444:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:11:40.237326 systemd[1]: Successfully loaded SELinux policy in 33.301ms. Mar 14 00:11:40.237339 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.759ms. 
Mar 14 00:11:40.237350 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:11:40.237362 systemd[1]: Detected virtualization kvm. Mar 14 00:11:40.237373 systemd[1]: Detected architecture arm64. Mar 14 00:11:40.237383 systemd[1]: Detected first boot. Mar 14 00:11:40.237394 systemd[1]: Hostname set to <ci-4081-3-6-n-8cab04691e>. Mar 14 00:11:40.237406 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:11:40.237418 zram_generator::config[1053]: No configuration found. Mar 14 00:11:40.237434 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:11:40.237444 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 14 00:11:40.237455 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 14 00:11:40.237465 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 14 00:11:40.237476 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:11:40.237525 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:11:40.237538 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:11:40.237552 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:11:40.237562 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:11:40.237572 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:11:40.237583 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:11:40.237593 systemd[1]: Created slice user.slice - User and Session Slice. Mar 14 00:11:40.237603 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:11:40.237614 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:11:40.237624 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:11:40.237636 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:11:40.237647 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:11:40.237657 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:11:40.237668 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 14 00:11:40.237679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:11:40.237691 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 14 00:11:40.237701 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 14 00:11:40.237713 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 14 00:11:40.237723 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:11:40.237733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:11:40.237748 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:11:40.237758 systemd[1]: Reached target slices.target - Slice Units.
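"Initializing machine ID from VM UUID" is systemd seeding /etc/machine-id from the hypervisor-provided DMI product UUID on this first boot. A hedged sketch of the idea; the sysfs path is the usual location on an SMBIOS-equipped guest like this one, and the real logic lives in systemd-machine-id-setup:

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        # On a first boot in a VM, systemd can seed the machine ID from the
        # DMI product UUID the hypervisor exposes; the machine-id format is
        # 32 lowercase hex digits with no dashes.
        with open(path) as f:
            return f.read().strip().replace("-", "").lower()

    print(machine_id_from_vm_uuid())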
Mar 14 00:11:40.237768 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:11:40.237778 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:11:40.237789 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:11:40.237805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:11:40.237817 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:11:40.237828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:11:40.237838 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:11:40.237848 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:11:40.237859 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:11:40.237869 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:11:40.237879 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:11:40.237889 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:11:40.237899 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 14 00:11:40.237912 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:11:40.237923 systemd[1]: Reached target machines.target - Containers. Mar 14 00:11:40.237933 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:11:40.237943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:11:40.237954 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:11:40.237969 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:11:40.237980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:11:40.237990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:11:40.238000 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:11:40.238010 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:11:40.238020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:11:40.238031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:11:40.238046 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 14 00:11:40.238058 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 14 00:11:40.238069 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 14 00:11:40.238079 systemd[1]: Stopped systemd-fsck-usr.service. Mar 14 00:11:40.238089 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:11:40.238100 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:11:40.238110 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:11:40.238121 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:11:40.238131 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Mar 14 00:11:40.238141 systemd[1]: verity-setup.service: Deactivated successfully. Mar 14 00:11:40.238158 systemd[1]: Stopped verity-setup.service. Mar 14 00:11:40.238169 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:11:40.238179 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:11:40.238190 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:11:40.238200 kernel: fuse: init (API version 7.39) Mar 14 00:11:40.238212 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:11:40.238222 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:11:40.238233 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:11:40.238244 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:11:40.238255 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:11:40.238265 kernel: ACPI: bus type drm_connector registered Mar 14 00:11:40.238321 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:11:40.238334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:11:40.238345 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:11:40.238358 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:11:40.238369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:11:40.238381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:11:40.238392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:11:40.238403 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:11:40.238415 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:11:40.238426 kernel: loop: module loaded Mar 14 00:11:40.238440 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:11:40.238452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:11:40.238463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:11:40.238473 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:11:40.238493 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:11:40.238506 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:11:40.238517 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:11:40.238531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:11:40.238542 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:11:40.238553 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:11:40.238563 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:11:40.238574 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:11:40.238585 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:11:40.238598 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:11:40.238646 systemd-journald[1116]: Collecting audit messages is disabled. 
Mar 14 00:11:40.238681 systemd-journald[1116]: Journal started Mar 14 00:11:40.238706 systemd-journald[1116]: Runtime Journal (/run/log/journal/711ee1ab781c483e83d5cbf1388fb164) is 8.0M, max 76.6M, 68.6M free. Mar 14 00:11:40.242349 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:11:39.931498 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:11:39.949548 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 14 00:11:39.949963 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 14 00:11:40.254656 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:11:40.254738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:11:40.254753 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:11:40.258652 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:11:40.262778 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:11:40.268258 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:11:40.272377 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:11:40.274558 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:11:40.279042 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:11:40.329585 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:11:40.338373 kernel: loop0: detected capacity change from 0 to 114328 Mar 14 00:11:40.338747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:11:40.341340 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:11:40.345982 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:11:40.356719 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:11:40.358754 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:11:40.370906 systemd-journald[1116]: Time spent on flushing to /var/log/journal/711ee1ab781c483e83d5cbf1388fb164 is 32.948ms for 1129 entries. Mar 14 00:11:40.370906 systemd-journald[1116]: System Journal (/var/log/journal/711ee1ab781c483e83d5cbf1388fb164) is 8.0M, max 584.8M, 576.8M free. Mar 14 00:11:40.411526 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:11:40.411566 systemd-journald[1116]: Received client request to flush runtime journal. Mar 14 00:11:40.374557 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:11:40.421345 kernel: loop1: detected capacity change from 0 to 114432 Mar 14 00:11:40.379119 systemd-tmpfiles[1130]: ACLs are not supported, ignoring. Mar 14 00:11:40.379129 systemd-tmpfiles[1130]: ACLs are not supported, ignoring. Mar 14 00:11:40.379378 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:11:40.381750 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:11:40.393086 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Mar 14 00:11:40.406841 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:11:40.413539 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:11:40.421221 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:11:40.431552 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 14 00:11:40.434375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:11:40.464771 kernel: loop2: detected capacity change from 0 to 8 Mar 14 00:11:40.470082 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:11:40.477002 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:11:40.485314 kernel: loop3: detected capacity change from 0 to 200864 Mar 14 00:11:40.500657 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Mar 14 00:11:40.500679 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Mar 14 00:11:40.510449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:11:40.526294 kernel: loop4: detected capacity change from 0 to 114328 Mar 14 00:11:40.541349 kernel: loop5: detected capacity change from 0 to 114432 Mar 14 00:11:40.559307 kernel: loop6: detected capacity change from 0 to 8 Mar 14 00:11:40.562301 kernel: loop7: detected capacity change from 0 to 200864 Mar 14 00:11:40.586420 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Mar 14 00:11:40.586914 (sd-merge)[1197]: Merged extensions into '/usr'. Mar 14 00:11:40.594422 systemd[1]: Reloading requested from client PID 1147 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:11:40.594442 systemd[1]: Reloading... Mar 14 00:11:40.705303 zram_generator::config[1219]: No configuration found. Mar 14 00:11:40.756379 ldconfig[1139]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:11:40.844776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:11:40.891663 systemd[1]: Reloading finished in 296 ms. Mar 14 00:11:40.915028 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:11:40.918747 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:11:40.927565 systemd[1]: Starting ensure-sysext.service... Mar 14 00:11:40.934724 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:11:40.940629 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:11:40.940649 systemd[1]: Reloading... Mar 14 00:11:40.985655 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:11:40.985928 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:11:40.989776 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:11:40.990116 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. 
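The (sd-merge) lines are systemd-sysext discovering the four extension images (the loop4-loop7 attachments just above) and merging them over /usr, which it does with an overlayfs mount whose read-only lower layers are the extensions stacked on the base /usr. A sketch of the equivalent mount; the staging paths are illustrative, and systemd-sysext manages the real mounts itself:

    import subprocess

    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]
    # Hypothetical staging paths; overlayfs treats the first lowerdir as the
    # topmost layer, so the base /usr goes last (bottom of the stack).
    lowerdirs = ["/run/extensions/%s/usr" % name for name in extensions] + ["/usr"]

    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", "ro,lowerdir=" + ":".join(lowerdirs), "/usr"],
        check=True,
    )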
Mar 14 00:11:40.991952 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 14 00:11:40.996657 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:11:40.996788 systemd-tmpfiles[1261]: Skipping /boot Mar 14 00:11:41.013652 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:11:41.013805 systemd-tmpfiles[1261]: Skipping /boot Mar 14 00:11:41.032329 zram_generator::config[1290]: No configuration found. Mar 14 00:11:41.130939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:11:41.177350 systemd[1]: Reloading finished in 236 ms. Mar 14 00:11:41.193806 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:11:41.201704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:11:41.212556 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:11:41.222252 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:11:41.229408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:11:41.237690 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:11:41.243645 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:11:41.249672 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:11:41.254849 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:11:41.259882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:11:41.273090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:11:41.278527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:11:41.279355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:11:41.284740 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:11:41.287259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:11:41.287442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:11:41.290584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:11:41.295584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:11:41.296895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:11:41.310166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:11:41.310366 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:11:41.312754 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:11:41.312884 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:11:41.315241 systemd-udevd[1336]: Using default interface naming scheme 'v255'. 
Mar 14 00:11:41.316085 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:11:41.325812 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:11:41.325989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:11:41.335631 systemd[1]: Finished ensure-sysext.service. Mar 14 00:11:41.337014 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:11:41.337158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:11:41.340840 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:11:41.348523 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 14 00:11:41.350157 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:11:41.352747 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 14 00:11:41.355562 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:11:41.357557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:11:41.367298 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:11:41.385408 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:11:41.395966 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:11:41.396731 augenrules[1376]: No rules Mar 14 00:11:41.398388 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:11:41.419998 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:11:41.421651 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 14 00:11:41.457919 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:11:41.573554 systemd-networkd[1384]: lo: Link UP Mar 14 00:11:41.573567 systemd-networkd[1384]: lo: Gained carrier Mar 14 00:11:41.575435 systemd-networkd[1384]: Enumeration completed Mar 14 00:11:41.575568 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:11:41.596359 systemd-resolved[1331]: Positive Trust Anchors: Mar 14 00:11:41.613967 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1364) Mar 14 00:11:41.596375 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:11:41.596409 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:11:41.605111 systemd-resolved[1331]: Using system hostname 'ci-4081-3-6-n-8cab04691e'. 
Mar 14 00:11:41.611083 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:11:41.612705 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:11:41.613865 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:11:41.614870 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:41.614881 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:11:41.616925 systemd[1]: Reached target network.target - Network. Mar 14 00:11:41.617646 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:41.617649 systemd-networkd[1384]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:11:41.618088 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:11:41.618917 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:11:41.620152 systemd-networkd[1384]: eth0: Link UP Mar 14 00:11:41.620159 systemd-networkd[1384]: eth0: Gained carrier Mar 14 00:11:41.620178 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:41.628360 systemd-networkd[1384]: eth1: Link UP Mar 14 00:11:41.628370 systemd-networkd[1384]: eth1: Gained carrier Mar 14 00:11:41.628393 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:41.664563 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:41.673310 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:11:41.678364 systemd-networkd[1384]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 14 00:11:41.679042 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 14 00:11:41.679213 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:11:41.680413 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Mar 14 00:11:41.688662 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:11:41.691460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:11:41.694999 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:11:41.696159 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:11:41.696198 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:11:41.707220 systemd-networkd[1384]: eth0: DHCPv4 address 188.245.55.47/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 14 00:11:41.707591 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. 
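[Both NICs above matched /usr/lib/systemd/network/zz-default.network, which matches on interface name and therefore triggers the 'potentially unpredictable interface name' note. Pinning a link to a stable attribute avoids that; a sketch with a placeholder MAC address (not taken from this log):

    # /etc/systemd/network/10-uplink.network
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Network]
    DHCP=ipv4

networkd considers .network files in lexical order and uses the first whose [Match] section fits, so a 10- prefixed unit in /etc wins over zz-default.network.]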
Mar 14 00:11:41.708379 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Mar 14 00:11:41.710797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:11:41.713332 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:11:41.714942 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:11:41.725661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:11:41.725864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:11:41.735181 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:11:41.735392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:11:41.736344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:11:41.744563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 14 00:11:41.748085 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:11:41.767548 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:11:41.770153 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Mar 14 00:11:41.770221 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 14 00:11:41.770234 kernel: [drm] features: -context_init Mar 14 00:11:41.771545 kernel: [drm] number of scanouts: 1 Mar 14 00:11:41.771599 kernel: [drm] number of cap sets: 0 Mar 14 00:11:41.772977 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Mar 14 00:11:41.779419 kernel: Console: switching to colour frame buffer device 160x50 Mar 14 00:11:41.785308 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 14 00:11:41.801640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:11:41.804306 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:11:41.814356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:11:41.816319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:11:41.823604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:11:41.884492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:11:41.929821 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:11:41.943736 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:11:41.960295 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:11:41.990208 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:11:41.992398 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:11:41.993127 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:11:41.993944 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Mar 14 00:11:41.994738 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:11:41.995826 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:11:41.996591 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:11:41.997298 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:11:41.998050 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:11:41.998089 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:11:41.998670 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:11:42.000392 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:11:42.002934 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:11:42.012716 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:11:42.015993 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:11:42.018112 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:11:42.018981 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:11:42.019619 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:11:42.020201 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:11:42.020240 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:11:42.023427 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:11:42.026791 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:11:42.034036 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 14 00:11:42.037523 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:11:42.047388 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:11:42.052309 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:11:42.056392 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:11:42.062593 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:11:42.064427 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:11:42.066115 jq[1448]: false Mar 14 00:11:42.078492 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 14 00:11:42.080058 dbus-daemon[1447]: [system] SELinux support is enabled Mar 14 00:11:42.084205 coreos-metadata[1446]: Mar 14 00:11:42.084 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 14 00:11:42.085370 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:11:42.088873 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:11:42.093872 coreos-metadata[1446]: Mar 14 00:11:42.093 INFO Fetch successful Mar 14 00:11:42.097495 systemd[1]: Starting systemd-logind.service - User Login Management... 
Mar 14 00:11:42.099907 coreos-metadata[1446]: Mar 14 00:11:42.099 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 14 00:11:42.100195 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:11:42.100753 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:11:42.102137 coreos-metadata[1446]: Mar 14 00:11:42.102 INFO Fetch successful Mar 14 00:11:42.103635 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:11:42.111654 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:11:42.113644 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:11:42.120485 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:11:42.125807 extend-filesystems[1451]: Found loop4 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found loop5 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found loop6 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found loop7 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda1 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda2 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda3 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found usr Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda4 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda6 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda7 Mar 14 00:11:42.128054 extend-filesystems[1451]: Found sda9 Mar 14 00:11:42.128054 extend-filesystems[1451]: Checking size of /dev/sda9 Mar 14 00:11:42.128356 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:11:42.128775 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:11:42.134749 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:11:42.137363 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:11:42.153304 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:11:42.153361 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:11:42.155047 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:11:42.155065 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:11:42.164289 jq[1464]: true Mar 14 00:11:42.165760 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:11:42.166002 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
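[coreos-metadata is reading Hetzner's link-local metadata service; both fetches logged above can be reproduced by hand from the instance:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks
]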
Mar 14 00:11:42.176965 extend-filesystems[1451]: Resized partition /dev/sda9 Mar 14 00:11:42.179254 tar[1474]: linux-arm64/LICENSE Mar 14 00:11:42.183240 tar[1474]: linux-arm64/helm Mar 14 00:11:42.187503 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:11:42.195308 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 14 00:11:42.202800 jq[1487]: true Mar 14 00:11:42.218804 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:11:42.223235 update_engine[1462]: I20260314 00:11:42.222635 1462 main.cc:92] Flatcar Update Engine starting Mar 14 00:11:42.235602 update_engine[1462]: I20260314 00:11:42.233632 1462 update_check_scheduler.cc:74] Next update check in 8m55s Mar 14 00:11:42.236650 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:11:42.240507 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:11:42.248770 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:11:42.251657 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:11:42.307291 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1382) Mar 14 00:11:42.327518 systemd-logind[1460]: New seat seat0. Mar 14 00:11:42.330291 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 14 00:11:42.343765 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Mar 14 00:11:42.343792 systemd-logind[1460]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 14 00:11:42.344159 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:11:42.346327 extend-filesystems[1491]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 14 00:11:42.346327 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 14 00:11:42.346327 extend-filesystems[1491]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 14 00:11:42.354769 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:11:42.347711 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:11:42.354913 extend-filesystems[1451]: Resized filesystem in /dev/sda9 Mar 14 00:11:42.354913 extend-filesystems[1451]: Found sr0 Mar 14 00:11:42.348259 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:11:42.354316 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:11:42.376931 systemd[1]: Starting sshkeys.service... Mar 14 00:11:42.423051 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:11:42.439675 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:11:42.495879 coreos-metadata[1525]: Mar 14 00:11:42.495 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 14 00:11:42.497433 coreos-metadata[1525]: Mar 14 00:11:42.497 INFO Fetch successful Mar 14 00:11:42.499159 unknown[1525]: wrote ssh authorized keys file for user: core Mar 14 00:11:42.532076 update-ssh-keys[1529]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:11:42.533027 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:11:42.543140 systemd[1]: Finished sshkeys.service. 
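[extend-filesystems performed an online grow of the root ext4 filesystem. At the reported 4 KiB block size, the resize goes from 1617920 blocks (~6.2 GiB) to 9393147 blocks (~35.8 GiB). The manual equivalent, once the underlying partition has been enlarged, is a single call against the mounted filesystem:

    resize2fs /dev/sda9    # ext4 supports online grow; device name taken from this log
]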
Mar 14 00:11:42.553695 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:11:42.621159 containerd[1492]: time="2026-03-14T00:11:42.621056280Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:11:42.690025 containerd[1492]: time="2026-03-14T00:11:42.689923960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:11:42.694649 containerd[1492]: time="2026-03-14T00:11:42.694600960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:11:42.694649 containerd[1492]: time="2026-03-14T00:11:42.694643520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:11:42.694743 containerd[1492]: time="2026-03-14T00:11:42.694660800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:11:42.694836 containerd[1492]: time="2026-03-14T00:11:42.694812160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:11:42.694866 containerd[1492]: time="2026-03-14T00:11:42.694834640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:11:42.694917 containerd[1492]: time="2026-03-14T00:11:42.694897680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:11:42.694917 containerd[1492]: time="2026-03-14T00:11:42.694913680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:11:42.695090 containerd[1492]: time="2026-03-14T00:11:42.695066440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:11:42.695090 containerd[1492]: time="2026-03-14T00:11:42.695086560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:11:42.695140 containerd[1492]: time="2026-03-14T00:11:42.695099720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:11:42.695140 containerd[1492]: time="2026-03-14T00:11:42.695109600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:11:42.695193 containerd[1492]: time="2026-03-14T00:11:42.695175960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:11:42.696494 containerd[1492]: time="2026-03-14T00:11:42.696441520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:11:42.696742 containerd[1492]: time="2026-03-14T00:11:42.696591760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:11:42.696742 containerd[1492]: time="2026-03-14T00:11:42.696611000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:11:42.696742 containerd[1492]: time="2026-03-14T00:11:42.696709520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:11:42.696819 containerd[1492]: time="2026-03-14T00:11:42.696754560Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:11:42.703323 containerd[1492]: time="2026-03-14T00:11:42.703110280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:11:42.703323 containerd[1492]: time="2026-03-14T00:11:42.703162960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:11:42.703323 containerd[1492]: time="2026-03-14T00:11:42.703180200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:11:42.703323 containerd[1492]: time="2026-03-14T00:11:42.703196440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:11:42.703323 containerd[1492]: time="2026-03-14T00:11:42.703210840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:11:42.703504 containerd[1492]: time="2026-03-14T00:11:42.703368880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703622240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703729040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703758880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703772040Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703785680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703798560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703811400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703826880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703841720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703853520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:11:42.703858 containerd[1492]: time="2026-03-14T00:11:42.703864440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703876880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703895720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703908760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703926680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703939120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703954400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703968520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703980160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.703993160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.704006160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.704019440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.704031320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.704042080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704059 containerd[1492]: time="2026-03-14T00:11:42.704053720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704071720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704092360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704103720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704114520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704228640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704271480Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704300040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704312000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704321760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704333560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704343400Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:11:42.704357 containerd[1492]: time="2026-03-14T00:11:42.704353920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.706712240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.706827160Z" level=info msg="Connect containerd service" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.707100200Z" level=info msg="using legacy CRI server" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.707127920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.707248440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.707991000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.708877680Z" level=info msg="Start subscribing containerd event" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.708937000Z" level=info msg="Start recovering state" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.709027200Z" level=info msg="Start event monitor" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.709039360Z" level=info msg="Start snapshots syncer" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.709050640Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:11:42.709270 containerd[1492]: time="2026-03-14T00:11:42.709062120Z" level=info msg="Start streaming server" Mar 14 00:11:42.711554 containerd[1492]: time="2026-03-14T00:11:42.711529000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:11:42.711973 containerd[1492]: time="2026-03-14T00:11:42.711948040Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:11:42.712395 containerd[1492]: time="2026-03-14T00:11:42.712378720Z" level=info msg="containerd successfully booted in 0.094564s" Mar 14 00:11:42.712510 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:11:42.855640 tar[1474]: linux-arm64/README.md Mar 14 00:11:42.867330 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:11:43.030122 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:11:43.053540 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:11:43.059748 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:11:43.072136 systemd[1]: issuegen.service: Deactivated successfully. 
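[The long 'Start cri plugin with config' dump above is containerd 1.7 echoing its effective CRI configuration: overlayfs snapshotter, runc via io.containerd.runc.v2, and SystemdCgroup:true for the runc runtime. The on-disk equivalent of that last, non-default setting in config version 2 syntax would be roughly:

    # /etc/containerd/config.toml (fragment; a sketch of the logged state, not read from disk)
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
]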
Mar 14 00:11:43.072532 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:11:43.085106 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:11:43.095044 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:11:43.104909 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:11:43.112789 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 14 00:11:43.114535 systemd-networkd[1384]: eth0: Gained IPv6LL Mar 14 00:11:43.115318 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Mar 14 00:11:43.115837 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:11:43.119783 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:11:43.121520 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:11:43.132742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:11:43.137870 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:11:43.171172 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:11:43.625565 systemd-networkd[1384]: eth1: Gained IPv6LL Mar 14 00:11:43.626292 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Mar 14 00:11:43.901571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:11:43.903994 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:11:43.904607 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:11:43.906227 systemd[1]: Startup finished in 786ms (kernel) + 4.759s (initrd) + 4.495s (userspace) = 10.040s. Mar 14 00:11:44.338667 kubelet[1576]: E0314 00:11:44.338545 1576 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:11:44.340855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:11:44.341174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:11:54.592042 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:11:54.598589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:11:54.721418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:11:54.733093 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:11:54.782325 kubelet[1595]: E0314 00:11:54.782137 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:11:54.785767 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:11:54.785946 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
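[This kubelet failure is the expected pre-bootstrap state of a kubeadm-managed node (note the unset KUBELET_KUBEADM_ARGS variable above): /var/lib/kubelet/config.yaml is only written when the node is initialized or joined. A sketch of the step that would end the crash loop, with placeholders for the cluster-specific values:

    kubeadm join <control-plane-endpoint>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

After that, config.yaml holds a KubeletConfiguration object (apiVersion kubelet.config.k8s.io/v1beta1) and the unit starts cleanly.]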
Mar 14 00:12:05.037037 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:12:05.044599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:05.197581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:05.197826 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:05.245131 kubelet[1610]: E0314 00:12:05.245084 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:05.248407 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:05.248653 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:12:13.989318 systemd-timesyncd[1357]: Contacted time server 139.162.187.236:123 (2.flatcar.pool.ntp.org). Mar 14 00:12:13.989437 systemd-timesyncd[1357]: Initial clock synchronization to Sat 2026-03-14 00:12:13.729155 UTC. Mar 14 00:12:15.499103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:12:15.509076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:15.633479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:15.646866 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:15.690156 kubelet[1626]: E0314 00:12:15.690086 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:15.692988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:15.693152 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:12:24.316799 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:12:24.323783 systemd[1]: Started sshd@0-188.245.55.47:22-68.220.241.50:58330.service - OpenSSH per-connection server daemon (68.220.241.50:58330). Mar 14 00:12:24.910329 sshd[1635]: Accepted publickey for core from 68.220.241.50 port 58330 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:24.912633 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:24.921761 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:12:24.928667 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:12:24.932056 systemd-logind[1460]: New session 1 of user core. Mar 14 00:12:24.945216 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:12:24.953235 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:12:24.957849 (systemd)[1639]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:12:25.064799 systemd[1639]: Queued start job for default target default.target. 
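[The 'Scheduled restart job' entries arrive almost exactly ten seconds after each failure. That cadence matches the restart policy the kubelet unit typically ships with on kubeadm-style installs; assuming the stock unit (not shown in this log), the relevant stanza is:

    [Service]
    Restart=always
    RestartSec=10
]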
Mar 14 00:12:25.072671 systemd[1639]: Created slice app.slice - User Application Slice. Mar 14 00:12:25.072738 systemd[1639]: Reached target paths.target - Paths. Mar 14 00:12:25.072768 systemd[1639]: Reached target timers.target - Timers. Mar 14 00:12:25.074942 systemd[1639]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:12:25.088643 systemd[1639]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:12:25.088888 systemd[1639]: Reached target sockets.target - Sockets. Mar 14 00:12:25.088922 systemd[1639]: Reached target basic.target - Basic System. Mar 14 00:12:25.089014 systemd[1639]: Reached target default.target - Main User Target. Mar 14 00:12:25.089071 systemd[1639]: Startup finished in 124ms. Mar 14 00:12:25.089686 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:12:25.099594 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:12:25.530880 systemd[1]: Started sshd@1-188.245.55.47:22-68.220.241.50:58332.service - OpenSSH per-connection server daemon (68.220.241.50:58332). Mar 14 00:12:25.930501 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 14 00:12:25.941806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:26.063461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:26.075904 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:26.111682 sshd[1650]: Accepted publickey for core from 68.220.241.50 port 58332 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:26.114051 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:26.121170 systemd-logind[1460]: New session 2 of user core. Mar 14 00:12:26.124046 kubelet[1660]: E0314 00:12:26.123990 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:26.129890 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:12:26.130344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:26.130508 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:12:26.528346 sshd[1650]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:26.536005 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:12:26.536187 systemd[1]: sshd@1-188.245.55.47:22-68.220.241.50:58332.service: Deactivated successfully. Mar 14 00:12:26.538233 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:12:26.539402 systemd-logind[1460]: Removed session 2. Mar 14 00:12:26.641750 systemd[1]: Started sshd@2-188.245.55.47:22-68.220.241.50:58342.service - OpenSSH per-connection server daemon (68.220.241.50:58342). Mar 14 00:12:27.222420 sshd[1672]: Accepted publickey for core from 68.220.241.50 port 58342 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:27.225140 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:27.231449 systemd-logind[1460]: New session 3 of user core. 
Mar 14 00:12:27.237607 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:12:27.635014 sshd[1672]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:27.641084 systemd[1]: sshd@2-188.245.55.47:22-68.220.241.50:58342.service: Deactivated successfully. Mar 14 00:12:27.643322 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:12:27.645400 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:12:27.646536 systemd-logind[1460]: Removed session 3. Mar 14 00:12:27.745526 systemd[1]: Started sshd@3-188.245.55.47:22-68.220.241.50:58344.service - OpenSSH per-connection server daemon (68.220.241.50:58344). Mar 14 00:12:27.866603 update_engine[1462]: I20260314 00:12:27.866422 1462 update_attempter.cc:509] Updating boot flags... Mar 14 00:12:27.918311 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1690) Mar 14 00:12:27.996481 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1689) Mar 14 00:12:28.349232 sshd[1679]: Accepted publickey for core from 68.220.241.50 port 58344 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:28.352093 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:28.358299 systemd-logind[1460]: New session 4 of user core. Mar 14 00:12:28.374563 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:12:28.765077 sshd[1679]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:28.769967 systemd[1]: sshd@3-188.245.55.47:22-68.220.241.50:58344.service: Deactivated successfully. Mar 14 00:12:28.772470 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:12:28.773356 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:12:28.774785 systemd-logind[1460]: Removed session 4. Mar 14 00:12:28.877066 systemd[1]: Started sshd@4-188.245.55.47:22-68.220.241.50:58348.service - OpenSSH per-connection server daemon (68.220.241.50:58348). Mar 14 00:12:29.458998 sshd[1704]: Accepted publickey for core from 68.220.241.50 port 58348 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:29.461729 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:29.466917 systemd-logind[1460]: New session 5 of user core. Mar 14 00:12:29.477625 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:12:29.792350 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:12:29.792695 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:29.808808 sudo[1707]: pam_unix(sudo:session): session closed for user root Mar 14 00:12:29.903706 sshd[1704]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:29.909584 systemd[1]: sshd@4-188.245.55.47:22-68.220.241.50:58348.service: Deactivated successfully. Mar 14 00:12:29.912507 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:12:29.913530 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:12:29.914807 systemd-logind[1460]: Removed session 5. Mar 14 00:12:30.015597 systemd[1]: Started sshd@5-188.245.55.47:22-68.220.241.50:58362.service - OpenSSH per-connection server daemon (68.220.241.50:58362). 
Mar 14 00:12:30.611102 sshd[1712]: Accepted publickey for core from 68.220.241.50 port 58362 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:30.612431 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:30.618514 systemd-logind[1460]: New session 6 of user core. Mar 14 00:12:30.627680 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:12:30.935391 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:12:30.936191 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:30.940885 sudo[1716]: pam_unix(sudo:session): session closed for user root Mar 14 00:12:30.947837 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:12:30.948117 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:30.966839 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:12:30.969359 auditctl[1719]: No rules Mar 14 00:12:30.968949 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:12:30.969131 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:12:30.972928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:12:31.000959 augenrules[1737]: No rules Mar 14 00:12:31.002629 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:12:31.005753 sudo[1715]: pam_unix(sudo:session): session closed for user root Mar 14 00:12:31.099689 sshd[1712]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:31.105334 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:12:31.105655 systemd[1]: sshd@5-188.245.55.47:22-68.220.241.50:58362.service: Deactivated successfully. Mar 14 00:12:31.107746 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:12:31.110180 systemd-logind[1460]: Removed session 6. Mar 14 00:12:31.149738 systemd[1]: Started sshd@6-188.245.55.47:22-185.247.137.205:50165.service - OpenSSH per-connection server daemon (185.247.137.205:50165). Mar 14 00:12:31.205953 systemd[1]: Started sshd@7-188.245.55.47:22-68.220.241.50:58378.service - OpenSSH per-connection server daemon (68.220.241.50:58378). Mar 14 00:12:31.801441 sshd[1747]: Accepted publickey for core from 68.220.241.50 port 58378 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:31.803440 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:31.809719 systemd-logind[1460]: New session 7 of user core. Mar 14 00:12:31.818631 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:12:32.125883 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:12:32.126150 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:12:32.419774 systemd[1]: Starting docker.service - Docker Application Container Engine... 
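[The audit-rules bounce above deliberately ends with an empty kernel rule set, as both auditctl and augenrules report. The same state can be confirmed at any time with:

    auditctl -l    # prints 'No rules' when the kernel audit rule list is empty
]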
Mar 14 00:12:32.421096 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:12:32.669954 dockerd[1765]: time="2026-03-14T00:12:32.669312239Z" level=info msg="Starting up" Mar 14 00:12:32.750491 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport404425641-merged.mount: Deactivated successfully. Mar 14 00:12:32.768327 systemd[1]: var-lib-docker-metacopy\x2dcheck3790336605-merged.mount: Deactivated successfully. Mar 14 00:12:32.778050 dockerd[1765]: time="2026-03-14T00:12:32.777997075Z" level=info msg="Loading containers: start." Mar 14 00:12:32.886304 kernel: Initializing XFRM netlink socket Mar 14 00:12:32.969416 systemd-networkd[1384]: docker0: Link UP Mar 14 00:12:32.983094 dockerd[1765]: time="2026-03-14T00:12:32.983019026Z" level=info msg="Loading containers: done." Mar 14 00:12:33.001189 dockerd[1765]: time="2026-03-14T00:12:33.000692104Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:12:33.001189 dockerd[1765]: time="2026-03-14T00:12:33.000963983Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:12:33.001523 dockerd[1765]: time="2026-03-14T00:12:33.001253331Z" level=info msg="Daemon has completed initialization" Mar 14 00:12:33.043046 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:12:33.043482 dockerd[1765]: time="2026-03-14T00:12:33.042824369Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:12:33.132322 sshd[1745]: Connection closed by 185.247.137.205 port 50165 Mar 14 00:12:33.133838 systemd[1]: sshd@6-188.245.55.47:22-185.247.137.205:50165.service: Deactivated successfully. Mar 14 00:12:33.167655 systemd[1]: Started sshd@8-188.245.55.47:22-185.247.137.205:34747.service - OpenSSH per-connection server daemon (185.247.137.205:34747). Mar 14 00:12:33.250612 sshd[1906]: Connection closed by 185.247.137.205 port 34747 [preauth] Mar 14 00:12:33.253437 systemd[1]: sshd@8-188.245.55.47:22-185.247.137.205:34747.service: Deactivated successfully. Mar 14 00:12:33.533589 containerd[1492]: time="2026-03-14T00:12:33.533074209Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 14 00:12:34.091614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1541522846.mount: Deactivated successfully. 
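[dockerd's overlay2 warning means it falls back to the slower naive diff path because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. Both halves of that statement are checkable on the host (the /proc/config.gz probe assumes the kernel exposes its config there):

    docker info --format '{{.Driver}}'               # expect: overlay2
    zgrep OVERLAY_FS_REDIRECT_DIR /proc/config.gz    # expect: ...=y
]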
Mar 14 00:12:35.550978 containerd[1492]: time="2026-03-14T00:12:35.549349954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:35.550978 containerd[1492]: time="2026-03-14T00:12:35.550869879Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=24583350" Mar 14 00:12:35.551585 containerd[1492]: time="2026-03-14T00:12:35.551551695Z" level=info msg="ImageCreate event name:\"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:35.554877 containerd[1492]: time="2026-03-14T00:12:35.554840773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:35.557562 containerd[1492]: time="2026-03-14T00:12:35.557500411Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"24579851\" in 2.024373646s" Mar 14 00:12:35.557771 containerd[1492]: time="2026-03-14T00:12:35.557733153Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:3299c3f36446e899e7d38f97cdbd93a12ace0457ebca8f6d94ab33d86f9740bd\"" Mar 14 00:12:35.558839 containerd[1492]: time="2026-03-14T00:12:35.558758891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 14 00:12:36.180593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 14 00:12:36.188827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:36.307856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:36.313462 (kubelet)[1978]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:36.372717 kubelet[1978]: E0314 00:12:36.372619 1978 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:36.377047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:36.377363 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:12:36.697823 containerd[1492]: time="2026-03-14T00:12:36.697743114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:36.699446 containerd[1492]: time="2026-03-14T00:12:36.699407165Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=19139661" Mar 14 00:12:36.701208 containerd[1492]: time="2026-03-14T00:12:36.701145808Z" level=info msg="ImageCreate event name:\"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:36.705084 containerd[1492]: time="2026-03-14T00:12:36.705022245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:36.706775 containerd[1492]: time="2026-03-14T00:12:36.706266819Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"20724045\" in 1.147253866s" Mar 14 00:12:36.706775 containerd[1492]: time="2026-03-14T00:12:36.706345444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:be20fbe989d9e759458cc8dbbc6e6c4a17e5d6f9db86b2a6cf4e3dfba0fe86e5\"" Mar 14 00:12:36.707689 containerd[1492]: time="2026-03-14T00:12:36.707666446Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 14 00:12:37.821650 containerd[1492]: time="2026-03-14T00:12:37.821111300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:37.823326 containerd[1492]: time="2026-03-14T00:12:37.823217843Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=14195564" Mar 14 00:12:37.826007 containerd[1492]: time="2026-03-14T00:12:37.825893048Z" level=info msg="ImageCreate event name:\"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:37.831325 containerd[1492]: time="2026-03-14T00:12:37.829950009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:37.831796 containerd[1492]: time="2026-03-14T00:12:37.831744862Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"15779966\" in 1.123938134s" Mar 14 00:12:37.831913 containerd[1492]: time="2026-03-14T00:12:37.831885490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:4addcfb720a81f20ddfad093c4a397bb9f3d99b798f610f0ecc83cafd7f0a3bd\"" Mar 14 00:12:37.833643 
containerd[1492]: time="2026-03-14T00:12:37.833589480Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 14 00:12:38.789239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2813782830.mount: Deactivated successfully. Mar 14 00:12:39.073946 containerd[1492]: time="2026-03-14T00:12:39.073587518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:39.075566 containerd[1492]: time="2026-03-14T00:12:39.075490161Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=22697114" Mar 14 00:12:39.076489 containerd[1492]: time="2026-03-14T00:12:39.076393918Z" level=info msg="ImageCreate event name:\"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:39.078702 containerd[1492]: time="2026-03-14T00:12:39.078645399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:39.079666 containerd[1492]: time="2026-03-14T00:12:39.079509761Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"22696107\" in 1.2458644s" Mar 14 00:12:39.079666 containerd[1492]: time="2026-03-14T00:12:39.079550634Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:8167398c8957d56adceac5bd6436d6ac238c546a5f5c92e450a1c380c0aa7d5d\"" Mar 14 00:12:39.080463 containerd[1492]: time="2026-03-14T00:12:39.080118818Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 14 00:12:39.585231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2364961142.mount: Deactivated successfully. 
Mar 14 00:12:40.621687 containerd[1492]: time="2026-03-14T00:12:40.621583953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:40.622922 containerd[1492]: time="2026-03-14T00:12:40.622847317Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395498" Mar 14 00:12:40.625333 containerd[1492]: time="2026-03-14T00:12:40.624144048Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:40.628395 containerd[1492]: time="2026-03-14T00:12:40.627877877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:40.629924 containerd[1492]: time="2026-03-14T00:12:40.629800256Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.549646916s" Mar 14 00:12:40.629924 containerd[1492]: time="2026-03-14T00:12:40.629839856Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Mar 14 00:12:40.630869 containerd[1492]: time="2026-03-14T00:12:40.630535114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:12:41.087135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount172332944.mount: Deactivated successfully. 
Mar 14 00:12:41.092826 containerd[1492]: time="2026-03-14T00:12:41.092755287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:41.094111 containerd[1492]: time="2026-03-14T00:12:41.094062852Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Mar 14 00:12:41.095313 containerd[1492]: time="2026-03-14T00:12:41.094838687Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:41.102439 containerd[1492]: time="2026-03-14T00:12:41.102399966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:41.103389 containerd[1492]: time="2026-03-14T00:12:41.103351845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 472.784523ms" Mar 14 00:12:41.103500 containerd[1492]: time="2026-03-14T00:12:41.103484328Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 14 00:12:41.104216 containerd[1492]: time="2026-03-14T00:12:41.104191144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 14 00:12:41.643162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125530471.mount: Deactivated successfully. Mar 14 00:12:42.330817 containerd[1492]: time="2026-03-14T00:12:42.330742305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:42.332568 containerd[1492]: time="2026-03-14T00:12:42.332514735Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=21125601" Mar 14 00:12:42.334698 containerd[1492]: time="2026-03-14T00:12:42.333891311Z" level=info msg="ImageCreate event name:\"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:42.340074 containerd[1492]: time="2026-03-14T00:12:42.340020493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:12:42.342331 containerd[1492]: time="2026-03-14T00:12:42.342268636Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"21136588\" in 1.237929741s" Mar 14 00:12:42.342455 containerd[1492]: time="2026-03-14T00:12:42.342436945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42\"" Mar 14 00:12:46.430830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 14 00:12:46.442918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:46.580453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:46.582459 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:12:46.617798 kubelet[2148]: E0314 00:12:46.617750 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:12:46.621141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:12:46.621459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:12:48.234607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:48.242788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:48.280066 systemd[1]: Reloading requested from client PID 2163 ('systemctl') (unit session-7.scope)... Mar 14 00:12:48.280086 systemd[1]: Reloading... Mar 14 00:12:48.400312 zram_generator::config[2201]: No configuration found. Mar 14 00:12:48.510504 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:12:48.581009 systemd[1]: Reloading finished in 300 ms. Mar 14 00:12:48.629368 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:12:48.629475 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:12:48.629875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:48.637375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:48.774337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:48.780695 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:12:48.829331 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:12:48.831313 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:12:48.831313 kubelet[2251]: I0314 00:12:48.829835 2251 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:12:49.351112 kubelet[2251]: I0314 00:12:49.351068 2251 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:12:49.351306 kubelet[2251]: I0314 00:12:49.351269 2251 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:12:49.351396 kubelet[2251]: I0314 00:12:49.351385 2251 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:12:49.351451 kubelet[2251]: I0314 00:12:49.351442 2251 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:12:49.351802 kubelet[2251]: I0314 00:12:49.351783 2251 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:12:49.361473 kubelet[2251]: E0314 00:12:49.361407 2251 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://188.245.55.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:12:49.361935 kubelet[2251]: I0314 00:12:49.361892 2251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:12:49.368202 kubelet[2251]: E0314 00:12:49.368152 2251 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:12:49.368330 kubelet[2251]: I0314 00:12:49.368238 2251 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:12:49.370801 kubelet[2251]: I0314 00:12:49.370762 2251 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 14 00:12:49.371039 kubelet[2251]: I0314 00:12:49.371014 2251 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:12:49.371198 kubelet[2251]: I0314 00:12:49.371040 2251 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8cab04691e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:12:49.371290 kubelet[2251]: I0314 00:12:49.371201 2251 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:12:49.371290 kubelet[2251]: I0314 00:12:49.371210 2251 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:12:49.371369 kubelet[2251]: I0314 00:12:49.371353 2251 container_manager_linux.go:315] "Creating Dynamic 
Resource Allocation (DRA) manager" Mar 14 00:12:49.373906 kubelet[2251]: I0314 00:12:49.373877 2251 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:12:49.375721 kubelet[2251]: I0314 00:12:49.375658 2251 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:12:49.377639 kubelet[2251]: I0314 00:12:49.375696 2251 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:12:49.377639 kubelet[2251]: I0314 00:12:49.376306 2251 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:12:49.377639 kubelet[2251]: I0314 00:12:49.376322 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:12:49.377639 kubelet[2251]: E0314 00:12:49.376523 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://188.245.55.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-8cab04691e&limit=500&resourceVersion=0\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:12:49.377832 kubelet[2251]: E0314 00:12:49.377669 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://188.245.55.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:12:49.378768 kubelet[2251]: I0314 00:12:49.378617 2251 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:12:49.380810 kubelet[2251]: I0314 00:12:49.380562 2251 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:12:49.380810 kubelet[2251]: I0314 00:12:49.380628 2251 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:12:49.380810 kubelet[2251]: W0314 00:12:49.380694 2251 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
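The "Failed to watch ... connection refused" reflector errors are expected on a control-plane node that is still bootstrapping: this kubelet is about to start kube-apiserver itself as a static pod from /etc/kubernetes/manifests, so every List/Watch against https://188.245.55.47:6443 is refused until that pod is serving. A sketch of the node list the reflector issues, using client-go (the Insecure flag is for illustration only):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host:            "https://188.245.55.47:6443",
            TLSClientConfig: rest.TLSClientConfig{Insecure: true}, // illustration only
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same field-selected list as the reflector error above.
        _, err = cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
            FieldSelector: "metadata.name=ci-4081-3-6-n-8cab04691e",
        })
        fmt.Println(err) // dial tcp 188.245.55.47:6443: connect: connection refused
    }
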
Mar 14 00:12:49.383730 kubelet[2251]: I0314 00:12:49.383698 2251 server.go:1262] "Started kubelet" Mar 14 00:12:49.386594 kubelet[2251]: I0314 00:12:49.386561 2251 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:12:49.387096 kubelet[2251]: I0314 00:12:49.387031 2251 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:12:49.387158 kubelet[2251]: I0314 00:12:49.387108 2251 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:12:49.387546 kubelet[2251]: I0314 00:12:49.387454 2251 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:12:49.387907 kubelet[2251]: I0314 00:12:49.387886 2251 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:12:49.390233 kubelet[2251]: I0314 00:12:49.390208 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:12:49.392645 kubelet[2251]: E0314 00:12:49.390422 2251 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.55.47:6443/api/v1/namespaces/default/events\": dial tcp 188.245.55.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-8cab04691e.189c8ccdef090d0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-8cab04691e,UID:ci-4081-3-6-n-8cab04691e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8cab04691e,},FirstTimestamp:2026-03-14 00:12:49.383664908 +0000 UTC m=+0.599426953,LastTimestamp:2026-03-14 00:12:49.383664908 +0000 UTC m=+0.599426953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8cab04691e,}" Mar 14 00:12:49.393555 kubelet[2251]: I0314 00:12:49.393465 2251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:12:49.398374 kubelet[2251]: E0314 00:12:49.397989 2251 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8cab04691e\" not found" Mar 14 00:12:49.398374 kubelet[2251]: I0314 00:12:49.398023 2251 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:12:49.398374 kubelet[2251]: I0314 00:12:49.398199 2251 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:12:49.398374 kubelet[2251]: I0314 00:12:49.398256 2251 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:12:49.399310 kubelet[2251]: E0314 00:12:49.398734 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://188.245.55.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:12:49.399310 kubelet[2251]: E0314 00:12:49.398963 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8cab04691e?timeout=10s\": dial tcp 188.245.55.47:6443: connect: connection refused" interval="200ms" Mar 14 00:12:49.399974 kubelet[2251]: I0314 00:12:49.399944 2251 factory.go:221] Registration of the crio container 
factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:12:49.401324 kubelet[2251]: I0314 00:12:49.401306 2251 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:12:49.401420 kubelet[2251]: I0314 00:12:49.401411 2251 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:12:49.417572 kubelet[2251]: I0314 00:12:49.417477 2251 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:12:49.418762 kubelet[2251]: I0314 00:12:49.418720 2251 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:12:49.418762 kubelet[2251]: I0314 00:12:49.418752 2251 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:12:49.418866 kubelet[2251]: I0314 00:12:49.418781 2251 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:12:49.418866 kubelet[2251]: E0314 00:12:49.418847 2251 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:12:49.429786 kubelet[2251]: E0314 00:12:49.429725 2251 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:12:49.431062 kubelet[2251]: E0314 00:12:49.430945 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://188.245.55.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:12:49.434109 kubelet[2251]: I0314 00:12:49.433924 2251 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:12:49.434109 kubelet[2251]: I0314 00:12:49.433940 2251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:12:49.434109 kubelet[2251]: I0314 00:12:49.433956 2251 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:12:49.435857 kubelet[2251]: I0314 00:12:49.435831 2251 policy_none.go:49] "None policy: Start" Mar 14 00:12:49.435857 kubelet[2251]: I0314 00:12:49.435860 2251 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:12:49.436052 kubelet[2251]: I0314 00:12:49.435875 2251 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:12:49.437153 kubelet[2251]: I0314 00:12:49.437124 2251 policy_none.go:47] "Start" Mar 14 00:12:49.441944 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:12:49.456359 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:12:49.460184 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:12:49.472166 kubelet[2251]: E0314 00:12:49.471360 2251 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:12:49.472166 kubelet[2251]: I0314 00:12:49.471773 2251 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:12:49.472166 kubelet[2251]: I0314 00:12:49.471796 2251 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:12:49.472514 kubelet[2251]: I0314 00:12:49.472251 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:12:49.475917 kubelet[2251]: E0314 00:12:49.475882 2251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:12:49.476333 kubelet[2251]: E0314 00:12:49.476250 2251 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-8cab04691e\" not found" Mar 14 00:12:49.534125 systemd[1]: Created slice kubepods-burstable-pod848a31b41504c8c149ae27a777747bd7.slice - libcontainer container kubepods-burstable-pod848a31b41504c8c149ae27a777747bd7.slice. Mar 14 00:12:49.554347 kubelet[2251]: E0314 00:12:49.554101 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.560464 systemd[1]: Created slice kubepods-burstable-podfeeefc666b823db84b456d5507fb0e6d.slice - libcontainer container kubepods-burstable-podfeeefc666b823db84b456d5507fb0e6d.slice. Mar 14 00:12:49.571150 kubelet[2251]: E0314 00:12:49.570137 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.574559 kubelet[2251]: I0314 00:12:49.574529 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.575174 kubelet[2251]: E0314 00:12:49.575147 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.55.47:6443/api/v1/nodes\": dial tcp 188.245.55.47:6443: connect: connection refused" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.575377 systemd[1]: Created slice kubepods-burstable-podae8d5d0b0e0246f7092459fe7738c92b.slice - libcontainer container kubepods-burstable-podae8d5d0b0e0246f7092459fe7738c92b.slice. 
Mar 14 00:12:49.577592 kubelet[2251]: E0314 00:12:49.577561 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.600624 kubelet[2251]: E0314 00:12:49.600575 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8cab04691e?timeout=10s\": dial tcp 188.245.55.47:6443: connect: connection refused" interval="400ms" Mar 14 00:12:49.600790 kubelet[2251]: I0314 00:12:49.600753 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.601060 kubelet[2251]: I0314 00:12:49.600792 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.601060 kubelet[2251]: I0314 00:12:49.600823 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.601060 kubelet[2251]: I0314 00:12:49.600876 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.601060 kubelet[2251]: I0314 00:12:49.600906 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae8d5d0b0e0246f7092459fe7738c92b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-8cab04691e\" (UID: \"ae8d5d0b0e0246f7092459fe7738c92b\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.601060 kubelet[2251]: I0314 00:12:49.600924 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/848a31b41504c8c149ae27a777747bd7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" (UID: \"848a31b41504c8c149ae27a777747bd7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.602484 kubelet[2251]: I0314 00:12:49.600942 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: 
\"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.602484 kubelet[2251]: I0314 00:12:49.600957 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/848a31b41504c8c149ae27a777747bd7-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" (UID: \"848a31b41504c8c149ae27a777747bd7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.602484 kubelet[2251]: I0314 00:12:49.600985 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/848a31b41504c8c149ae27a777747bd7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" (UID: \"848a31b41504c8c149ae27a777747bd7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.778094 kubelet[2251]: I0314 00:12:49.778009 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.778413 kubelet[2251]: E0314 00:12:49.778384 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.55.47:6443/api/v1/nodes\": dial tcp 188.245.55.47:6443: connect: connection refused" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:49.858989 containerd[1492]: time="2026-03-14T00:12:49.858632294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8cab04691e,Uid:848a31b41504c8c149ae27a777747bd7,Namespace:kube-system,Attempt:0,}" Mar 14 00:12:49.875485 containerd[1492]: time="2026-03-14T00:12:49.874880695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8cab04691e,Uid:feeefc666b823db84b456d5507fb0e6d,Namespace:kube-system,Attempt:0,}" Mar 14 00:12:49.880729 containerd[1492]: time="2026-03-14T00:12:49.880682421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8cab04691e,Uid:ae8d5d0b0e0246f7092459fe7738c92b,Namespace:kube-system,Attempt:0,}" Mar 14 00:12:50.001726 kubelet[2251]: E0314 00:12:50.001650 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8cab04691e?timeout=10s\": dial tcp 188.245.55.47:6443: connect: connection refused" interval="800ms" Mar 14 00:12:50.183852 kubelet[2251]: I0314 00:12:50.183828 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:50.184353 kubelet[2251]: E0314 00:12:50.184321 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.55.47:6443/api/v1/nodes\": dial tcp 188.245.55.47:6443: connect: connection refused" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:50.308742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548291943.mount: Deactivated successfully. 
Mar 14 00:12:50.316304 containerd[1492]: time="2026-03-14T00:12:50.316228621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:12:50.318416 containerd[1492]: time="2026-03-14T00:12:50.318344228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 14 00:12:50.323311 containerd[1492]: time="2026-03-14T00:12:50.321420032Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:12:50.323510 containerd[1492]: time="2026-03-14T00:12:50.323473195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:12:50.323916 containerd[1492]: time="2026-03-14T00:12:50.323883907Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:12:50.325685 containerd[1492]: time="2026-03-14T00:12:50.325578001Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:12:50.327046 containerd[1492]: time="2026-03-14T00:12:50.326993553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:12:50.328161 containerd[1492]: time="2026-03-14T00:12:50.328128963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:12:50.330598 containerd[1492]: time="2026-03-14T00:12:50.330523433Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 455.500526ms" Mar 14 00:12:50.331992 containerd[1492]: time="2026-03-14T00:12:50.331913783Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.198162ms" Mar 14 00:12:50.335762 containerd[1492]: time="2026-03-14T00:12:50.335708403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 454.941536ms" Mar 14 00:12:50.408970 kubelet[2251]: E0314 00:12:50.408914 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://188.245.55.47:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:12:50.463549 containerd[1492]: time="2026-03-14T00:12:50.462684138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:12:50.463549 containerd[1492]: time="2026-03-14T00:12:50.462747423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:12:50.463549 containerd[1492]: time="2026-03-14T00:12:50.462763344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:12:50.463549 containerd[1492]: time="2026-03-14T00:12:50.463385553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:12:50.469030 containerd[1492]: time="2026-03-14T00:12:50.468944153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:12:50.469386 containerd[1492]: time="2026-03-14T00:12:50.469344425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:12:50.470011 containerd[1492]: time="2026-03-14T00:12:50.469848145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:12:50.470914 containerd[1492]: time="2026-03-14T00:12:50.470743856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:12:50.470914 containerd[1492]: time="2026-03-14T00:12:50.470819182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:12:50.471167 containerd[1492]: time="2026-03-14T00:12:50.470530039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:12:50.471167 containerd[1492]: time="2026-03-14T00:12:50.470574842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:12:50.471167 containerd[1492]: time="2026-03-14T00:12:50.470670290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:12:50.496486 systemd[1]: Started cri-containerd-2daddd92b6060abbe7fe6ae32bcf4b717103faad320b0100623bfbaf40850e9f.scope - libcontainer container 2daddd92b6060abbe7fe6ae32bcf4b717103faad320b0100623bfbaf40850e9f. Mar 14 00:12:50.498271 systemd[1]: Started cri-containerd-efbf7af38f8b2593276cbb853f2f08ac946f013d383d74f044a1f5a50e7a4f58.scope - libcontainer container efbf7af38f8b2593276cbb853f2f08ac946f013d383d74f044a1f5a50e7a4f58. Mar 14 00:12:50.512415 systemd[1]: Started cri-containerd-cee9b384161f065e4838ef6a2d0c0638ac6070bbd5a03c769279e40210890f29.scope - libcontainer container cee9b384161f065e4838ef6a2d0c0638ac6070bbd5a03c769279e40210890f29. 
Mar 14 00:12:50.554307 kubelet[2251]: E0314 00:12:50.553268 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://188.245.55.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:12:50.560052 containerd[1492]: time="2026-03-14T00:12:50.560004284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-8cab04691e,Uid:feeefc666b823db84b456d5507fb0e6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"efbf7af38f8b2593276cbb853f2f08ac946f013d383d74f044a1f5a50e7a4f58\"" Mar 14 00:12:50.564508 kubelet[2251]: E0314 00:12:50.564465 2251 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://188.245.55.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-8cab04691e&limit=500&resourceVersion=0\": dial tcp 188.245.55.47:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:12:50.569078 containerd[1492]: time="2026-03-14T00:12:50.569023878Z" level=info msg="CreateContainer within sandbox \"efbf7af38f8b2593276cbb853f2f08ac946f013d383d74f044a1f5a50e7a4f58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:12:50.571778 containerd[1492]: time="2026-03-14T00:12:50.571479152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-8cab04691e,Uid:ae8d5d0b0e0246f7092459fe7738c92b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2daddd92b6060abbe7fe6ae32bcf4b717103faad320b0100623bfbaf40850e9f\"" Mar 14 00:12:50.578844 containerd[1492]: time="2026-03-14T00:12:50.578633759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-8cab04691e,Uid:848a31b41504c8c149ae27a777747bd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cee9b384161f065e4838ef6a2d0c0638ac6070bbd5a03c769279e40210890f29\"" Mar 14 00:12:50.579311 containerd[1492]: time="2026-03-14T00:12:50.579013069Z" level=info msg="CreateContainer within sandbox \"2daddd92b6060abbe7fe6ae32bcf4b717103faad320b0100623bfbaf40850e9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:12:50.585090 containerd[1492]: time="2026-03-14T00:12:50.584987542Z" level=info msg="CreateContainer within sandbox \"cee9b384161f065e4838ef6a2d0c0638ac6070bbd5a03c769279e40210890f29\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:12:50.595657 containerd[1492]: time="2026-03-14T00:12:50.595606303Z" level=info msg="CreateContainer within sandbox \"efbf7af38f8b2593276cbb853f2f08ac946f013d383d74f044a1f5a50e7a4f58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374\"" Mar 14 00:12:50.596410 containerd[1492]: time="2026-03-14T00:12:50.596368003Z" level=info msg="StartContainer for \"0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374\"" Mar 14 00:12:50.599964 containerd[1492]: time="2026-03-14T00:12:50.599800155Z" level=info msg="CreateContainer within sandbox \"2daddd92b6060abbe7fe6ae32bcf4b717103faad320b0100623bfbaf40850e9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a\"" Mar 14 00:12:50.600661 containerd[1492]: 
time="2026-03-14T00:12:50.600521772Z" level=info msg="StartContainer for \"eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a\"" Mar 14 00:12:50.611936 containerd[1492]: time="2026-03-14T00:12:50.610920915Z" level=info msg="CreateContainer within sandbox \"cee9b384161f065e4838ef6a2d0c0638ac6070bbd5a03c769279e40210890f29\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b72a4838a2063801e6060eeb5d87f20bd03426b99e73cc0aa46cea96a8d599df\"" Mar 14 00:12:50.611936 containerd[1492]: time="2026-03-14T00:12:50.611439756Z" level=info msg="StartContainer for \"b72a4838a2063801e6060eeb5d87f20bd03426b99e73cc0aa46cea96a8d599df\"" Mar 14 00:12:50.632866 systemd[1]: Started cri-containerd-0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374.scope - libcontainer container 0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374. Mar 14 00:12:50.642531 systemd[1]: Started cri-containerd-eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a.scope - libcontainer container eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a. Mar 14 00:12:50.654797 systemd[1]: Started cri-containerd-b72a4838a2063801e6060eeb5d87f20bd03426b99e73cc0aa46cea96a8d599df.scope - libcontainer container b72a4838a2063801e6060eeb5d87f20bd03426b99e73cc0aa46cea96a8d599df. Mar 14 00:12:50.705027 containerd[1492]: time="2026-03-14T00:12:50.703822952Z" level=info msg="StartContainer for \"0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374\" returns successfully" Mar 14 00:12:50.722469 containerd[1492]: time="2026-03-14T00:12:50.722181485Z" level=info msg="StartContainer for \"b72a4838a2063801e6060eeb5d87f20bd03426b99e73cc0aa46cea96a8d599df\" returns successfully" Mar 14 00:12:50.732708 containerd[1492]: time="2026-03-14T00:12:50.732641673Z" level=info msg="StartContainer for \"eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a\" returns successfully" Mar 14 00:12:50.803408 kubelet[2251]: E0314 00:12:50.803360 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-8cab04691e?timeout=10s\": dial tcp 188.245.55.47:6443: connect: connection refused" interval="1.6s" Mar 14 00:12:50.987091 kubelet[2251]: I0314 00:12:50.986975 2251 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:51.442504 kubelet[2251]: E0314 00:12:51.442464 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:51.442974 kubelet[2251]: E0314 00:12:51.442938 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:51.445909 kubelet[2251]: E0314 00:12:51.445880 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:52.448943 kubelet[2251]: E0314 00:12:52.448908 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:52.449353 kubelet[2251]: E0314 00:12:52.449244 2251 kubelet.go:3216] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.227859 kubelet[2251]: E0314 00:12:53.227649 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-8cab04691e\" not found" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.309950 kubelet[2251]: I0314 00:12:53.309910 2251 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.309950 kubelet[2251]: E0314 00:12:53.309952 2251 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-8cab04691e\": node \"ci-4081-3-6-n-8cab04691e\" not found" Mar 14 00:12:53.330346 kubelet[2251]: E0314 00:12:53.330301 2251 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8cab04691e\" not found" Mar 14 00:12:53.430765 kubelet[2251]: E0314 00:12:53.430718 2251 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8cab04691e\" not found" Mar 14 00:12:53.531659 kubelet[2251]: E0314 00:12:53.531533 2251 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8cab04691e\" not found" Mar 14 00:12:53.689477 kubelet[2251]: I0314 00:12:53.689442 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.697802 kubelet[2251]: E0314 00:12:53.697721 2251 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.699101 kubelet[2251]: I0314 00:12:53.698825 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.707369 kubelet[2251]: E0314 00:12:53.704911 2251 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.707369 kubelet[2251]: I0314 00:12:53.704948 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.711953 kubelet[2251]: E0314 00:12:53.711726 2251 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.711953 kubelet[2251]: I0314 00:12:53.711769 2251 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:53.713898 kubelet[2251]: E0314 00:12:53.713857 2251 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-8cab04691e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:54.379303 kubelet[2251]: I0314 00:12:54.379247 2251 apiserver.go:52] "Watching apiserver" Mar 14 00:12:54.398733 kubelet[2251]: I0314 00:12:54.398696 2251 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:12:55.249795 systemd[1]: 
Reloading requested from client PID 2539 ('systemctl') (unit session-7.scope)... Mar 14 00:12:55.249811 systemd[1]: Reloading... Mar 14 00:12:55.381307 zram_generator::config[2582]: No configuration found. Mar 14 00:12:55.485137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:12:55.568494 systemd[1]: Reloading finished in 318 ms. Mar 14 00:12:55.612732 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:55.630355 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:12:55.630674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:55.630756 systemd[1]: kubelet.service: Consumed 1.019s CPU time, 119.2M memory peak, 0B memory swap peak. Mar 14 00:12:55.638694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:12:55.778126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:12:55.789988 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:12:55.839912 kubelet[2624]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:12:55.839912 kubelet[2624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:12:55.840347 kubelet[2624]: I0314 00:12:55.839858 2624 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:12:55.853993 kubelet[2624]: I0314 00:12:55.853118 2624 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:12:55.853993 kubelet[2624]: I0314 00:12:55.853371 2624 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:12:55.853993 kubelet[2624]: I0314 00:12:55.853404 2624 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:12:55.853993 kubelet[2624]: I0314 00:12:55.853410 2624 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:12:55.853993 kubelet[2624]: I0314 00:12:55.853681 2624 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:12:55.855173 kubelet[2624]: I0314 00:12:55.855148 2624 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:12:55.858906 kubelet[2624]: I0314 00:12:55.858765 2624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:12:55.862383 kubelet[2624]: E0314 00:12:55.862257 2624 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:12:55.862383 kubelet[2624]: I0314 00:12:55.862345 2624 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
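Unlike the first kubelet (pid 2251), this one finds /var/lib/kubelet/pki/kubelet-client-current.pem, so the TLS bootstrap that failed with "connection refused" at 00:12:49 has since completed and the CSR path is skipped. That file bundles certificate and key in a single PEM, so the same path can serve as both halves of the pair (a sketch):

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // kubelet-client-current.pem holds the client cert and its key together,
        // so one path works for both arguments.
        const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
        pair, err := tls.LoadX509KeyPair(pem, pem)
        if err != nil {
            panic(err)
        }
        fmt.Println("certificates in chain:", len(pair.Certificate))
    }
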
Mar 14 00:12:55.864688 kubelet[2624]: I0314 00:12:55.864651 2624 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 14 00:12:55.864891 kubelet[2624]: I0314 00:12:55.864823 2624 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:12:55.865044 kubelet[2624]: I0314 00:12:55.864852 2624 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-8cab04691e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:12:55.865044 kubelet[2624]: I0314 00:12:55.865028 2624 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:12:55.865044 kubelet[2624]: I0314 00:12:55.865038 2624 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:12:55.865245 kubelet[2624]: I0314 00:12:55.865061 2624 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:12:55.865245 kubelet[2624]: I0314 00:12:55.865235 2624 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:12:55.865430 kubelet[2624]: I0314 00:12:55.865416 2624 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:12:55.865472 kubelet[2624]: I0314 00:12:55.865432 2624 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:12:55.866361 kubelet[2624]: I0314 00:12:55.866337 2624 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:12:55.866361 kubelet[2624]: I0314 00:12:55.866361 2624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:12:55.873303 kubelet[2624]: I0314 00:12:55.871398 2624 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:12:55.873303 kubelet[2624]: I0314 00:12:55.873139 2624 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:12:55.873481 kubelet[2624]: I0314 00:12:55.873319 2624 kubelet.go:964] "Not starting 
PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:12:55.880768 kubelet[2624]: I0314 00:12:55.880698 2624 server.go:1262] "Started kubelet" Mar 14 00:12:55.894297 kubelet[2624]: I0314 00:12:55.893463 2624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:12:55.906589 kubelet[2624]: I0314 00:12:55.883738 2624 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:12:55.908065 kubelet[2624]: I0314 00:12:55.906928 2624 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:12:55.908065 kubelet[2624]: I0314 00:12:55.907022 2624 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:12:55.908065 kubelet[2624]: I0314 00:12:55.907214 2624 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:12:55.908794 kubelet[2624]: I0314 00:12:55.908764 2624 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:12:55.920574 kubelet[2624]: I0314 00:12:55.910965 2624 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:12:55.923039 kubelet[2624]: I0314 00:12:55.910982 2624 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:12:55.926770 kubelet[2624]: E0314 00:12:55.911112 2624 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-8cab04691e\" not found" Mar 14 00:12:55.926898 kubelet[2624]: I0314 00:12:55.918667 2624 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:12:55.927968 kubelet[2624]: I0314 00:12:55.923255 2624 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:12:55.927968 kubelet[2624]: I0314 00:12:55.923546 2624 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:12:55.927968 kubelet[2624]: I0314 00:12:55.927632 2624 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:12:55.929520 kubelet[2624]: E0314 00:12:55.929463 2624 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:12:55.932854 kubelet[2624]: I0314 00:12:55.932835 2624 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:12:55.933464 kubelet[2624]: I0314 00:12:55.933388 2624 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:12:55.940425 kubelet[2624]: I0314 00:12:55.939931 2624 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:12:55.940425 kubelet[2624]: I0314 00:12:55.939960 2624 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:12:55.940425 kubelet[2624]: I0314 00:12:55.939981 2624 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:12:55.940425 kubelet[2624]: E0314 00:12:55.940025 2624 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:12:55.977679 kubelet[2624]: I0314 00:12:55.977632 2624 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:12:55.977679 kubelet[2624]: I0314 00:12:55.977663 2624 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:12:55.977679 kubelet[2624]: I0314 00:12:55.977684 2624 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:12:55.977845 kubelet[2624]: I0314 00:12:55.977813 2624 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:12:55.977845 kubelet[2624]: I0314 00:12:55.977822 2624 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:12:55.977845 kubelet[2624]: I0314 00:12:55.977837 2624 policy_none.go:49] "None policy: Start" Mar 14 00:12:55.977845 kubelet[2624]: I0314 00:12:55.977845 2624 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:12:55.977951 kubelet[2624]: I0314 00:12:55.977852 2624 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:12:55.977978 kubelet[2624]: I0314 00:12:55.977956 2624 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 14 00:12:55.977978 kubelet[2624]: I0314 00:12:55.977965 2624 policy_none.go:47] "Start" Mar 14 00:12:55.984620 kubelet[2624]: E0314 00:12:55.984576 2624 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:12:55.985433 kubelet[2624]: I0314 00:12:55.985217 2624 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:12:55.985868 kubelet[2624]: I0314 00:12:55.985237 2624 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:12:55.986251 kubelet[2624]: I0314 00:12:55.986120 2624 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:12:55.986926 kubelet[2624]: E0314 00:12:55.986806 2624 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 14 00:12:56.045309 kubelet[2624]: I0314 00:12:56.041965 2624 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.045309 kubelet[2624]: I0314 00:12:56.042553 2624 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.045309 kubelet[2624]: I0314 00:12:56.042843 2624 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.089866 kubelet[2624]: I0314 00:12:56.089833 2624 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.100851 kubelet[2624]: I0314 00:12:56.100747 2624 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.100851 kubelet[2624]: I0314 00:12:56.100833 2624 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.128667 kubelet[2624]: I0314 00:12:56.128609 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.128667 kubelet[2624]: I0314 00:12:56.128679 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.128962 kubelet[2624]: I0314 00:12:56.128713 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/848a31b41504c8c149ae27a777747bd7-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" (UID: \"848a31b41504c8c149ae27a777747bd7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.128962 kubelet[2624]: I0314 00:12:56.128743 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/848a31b41504c8c149ae27a777747bd7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" (UID: \"848a31b41504c8c149ae27a777747bd7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.128962 kubelet[2624]: I0314 00:12:56.128774 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.128962 kubelet[2624]: I0314 00:12:56.128804 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.128962 kubelet[2624]: I0314 00:12:56.128837 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/feeefc666b823db84b456d5507fb0e6d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-8cab04691e\" (UID: \"feeefc666b823db84b456d5507fb0e6d\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.129181 kubelet[2624]: I0314 00:12:56.128869 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae8d5d0b0e0246f7092459fe7738c92b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-8cab04691e\" (UID: \"ae8d5d0b0e0246f7092459fe7738c92b\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.129181 kubelet[2624]: I0314 00:12:56.128957 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/848a31b41504c8c149ae27a777747bd7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" (UID: \"848a31b41504c8c149ae27a777747bd7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.868686 kubelet[2624]: I0314 00:12:56.868183 2624 apiserver.go:52] "Watching apiserver" Mar 14 00:12:56.927240 kubelet[2624]: I0314 00:12:56.927184 2624 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:12:56.959365 kubelet[2624]: I0314 00:12:56.957445 2624 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.959715 kubelet[2624]: I0314 00:12:56.959691 2624 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.968424 kubelet[2624]: E0314 00:12:56.968393 2624 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-8cab04691e\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:56.973305 kubelet[2624]: E0314 00:12:56.970663 2624 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-8cab04691e\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" Mar 14 00:12:57.009413 kubelet[2624]: I0314 00:12:57.009334 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-8cab04691e" podStartSLOduration=1.009316142 podStartE2EDuration="1.009316142s" podCreationTimestamp="2026-03-14 00:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:12:56.993326372 +0000 UTC m=+1.198420921" watchObservedRunningTime="2026-03-14 00:12:57.009316142 +0000 UTC m=+1.214410611" Mar 14 00:12:57.022322 kubelet[2624]: I0314 00:12:57.021631 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-8cab04691e" podStartSLOduration=1.021612529 podStartE2EDuration="1.021612529s" podCreationTimestamp="2026-03-14 00:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:12:57.011661749 +0000 UTC m=+1.216756298" 
watchObservedRunningTime="2026-03-14 00:12:57.021612529 +0000 UTC m=+1.226707038" Mar 14 00:12:57.037297 kubelet[2624]: I0314 00:12:57.037220 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-8cab04691e" podStartSLOduration=1.037203894 podStartE2EDuration="1.037203894s" podCreationTimestamp="2026-03-14 00:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:12:57.022809233 +0000 UTC m=+1.227903742" watchObservedRunningTime="2026-03-14 00:12:57.037203894 +0000 UTC m=+1.242298403" Mar 14 00:13:01.642999 kubelet[2624]: I0314 00:13:01.642533 2624 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:13:01.644678 kubelet[2624]: I0314 00:13:01.644579 2624 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:13:01.644734 containerd[1492]: time="2026-03-14T00:13:01.644250577Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:13:02.687605 systemd[1]: Created slice kubepods-besteffort-pod2e6572b8_2f10_4347_b0d2_6f30adbb30dc.slice - libcontainer container kubepods-besteffort-pod2e6572b8_2f10_4347_b0d2_6f30adbb30dc.slice. Mar 14 00:13:02.771415 kubelet[2624]: I0314 00:13:02.771369 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r96ph\" (UniqueName: \"kubernetes.io/projected/2e6572b8-2f10-4347-b0d2-6f30adbb30dc-kube-api-access-r96ph\") pod \"kube-proxy-xvwlg\" (UID: \"2e6572b8-2f10-4347-b0d2-6f30adbb30dc\") " pod="kube-system/kube-proxy-xvwlg" Mar 14 00:13:02.771415 kubelet[2624]: I0314 00:13:02.771414 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e6572b8-2f10-4347-b0d2-6f30adbb30dc-kube-proxy\") pod \"kube-proxy-xvwlg\" (UID: \"2e6572b8-2f10-4347-b0d2-6f30adbb30dc\") " pod="kube-system/kube-proxy-xvwlg" Mar 14 00:13:02.771415 kubelet[2624]: I0314 00:13:02.771433 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e6572b8-2f10-4347-b0d2-6f30adbb30dc-xtables-lock\") pod \"kube-proxy-xvwlg\" (UID: \"2e6572b8-2f10-4347-b0d2-6f30adbb30dc\") " pod="kube-system/kube-proxy-xvwlg" Mar 14 00:13:02.771818 kubelet[2624]: I0314 00:13:02.771447 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e6572b8-2f10-4347-b0d2-6f30adbb30dc-lib-modules\") pod \"kube-proxy-xvwlg\" (UID: \"2e6572b8-2f10-4347-b0d2-6f30adbb30dc\") " pod="kube-system/kube-proxy-xvwlg" Mar 14 00:13:02.810823 systemd[1]: Created slice kubepods-besteffort-pod4d345c2b_b24d_4909_b521_d47643449273.slice - libcontainer container kubepods-besteffort-pod4d345c2b_b24d_4909_b521_d47643449273.slice. 
Mar 14 00:13:02.873479 kubelet[2624]: I0314 00:13:02.872382 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls84h\" (UniqueName: \"kubernetes.io/projected/4d345c2b-b24d-4909-b521-d47643449273-kube-api-access-ls84h\") pod \"tigera-operator-5588576f44-7xwlv\" (UID: \"4d345c2b-b24d-4909-b521-d47643449273\") " pod="tigera-operator/tigera-operator-5588576f44-7xwlv" Mar 14 00:13:02.873479 kubelet[2624]: I0314 00:13:02.872478 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d345c2b-b24d-4909-b521-d47643449273-var-lib-calico\") pod \"tigera-operator-5588576f44-7xwlv\" (UID: \"4d345c2b-b24d-4909-b521-d47643449273\") " pod="tigera-operator/tigera-operator-5588576f44-7xwlv" Mar 14 00:13:03.000993 containerd[1492]: time="2026-03-14T00:13:03.000504745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvwlg,Uid:2e6572b8-2f10-4347-b0d2-6f30adbb30dc,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:03.029067 containerd[1492]: time="2026-03-14T00:13:03.028637676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:03.029067 containerd[1492]: time="2026-03-14T00:13:03.028747921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:03.029067 containerd[1492]: time="2026-03-14T00:13:03.028774402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:03.029067 containerd[1492]: time="2026-03-14T00:13:03.028937688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:03.049473 systemd[1]: Started cri-containerd-316f3f9e53f0ad121060d409835a05f02c0b95dd06007bc8f7ea8a0f0d8269e8.scope - libcontainer container 316f3f9e53f0ad121060d409835a05f02c0b95dd06007bc8f7ea8a0f0d8269e8. 
Mar 14 00:13:03.071446 containerd[1492]: time="2026-03-14T00:13:03.071405993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xvwlg,Uid:2e6572b8-2f10-4347-b0d2-6f30adbb30dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"316f3f9e53f0ad121060d409835a05f02c0b95dd06007bc8f7ea8a0f0d8269e8\"" Mar 14 00:13:03.077331 containerd[1492]: time="2026-03-14T00:13:03.077174785Z" level=info msg="CreateContainer within sandbox \"316f3f9e53f0ad121060d409835a05f02c0b95dd06007bc8f7ea8a0f0d8269e8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:13:03.095559 containerd[1492]: time="2026-03-14T00:13:03.095493920Z" level=info msg="CreateContainer within sandbox \"316f3f9e53f0ad121060d409835a05f02c0b95dd06007bc8f7ea8a0f0d8269e8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a02ec88eb7e13facb56a08c87eae0ae4b5973de228308a4ccd87fbddb9e8ade9\"" Mar 14 00:13:03.096822 containerd[1492]: time="2026-03-14T00:13:03.096645406Z" level=info msg="StartContainer for \"a02ec88eb7e13facb56a08c87eae0ae4b5973de228308a4ccd87fbddb9e8ade9\"" Mar 14 00:13:03.117420 containerd[1492]: time="2026-03-14T00:13:03.117381439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-7xwlv,Uid:4d345c2b-b24d-4909-b521-d47643449273,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:13:03.129484 systemd[1]: Started cri-containerd-a02ec88eb7e13facb56a08c87eae0ae4b5973de228308a4ccd87fbddb9e8ade9.scope - libcontainer container a02ec88eb7e13facb56a08c87eae0ae4b5973de228308a4ccd87fbddb9e8ade9. Mar 14 00:13:03.147070 containerd[1492]: time="2026-03-14T00:13:03.146548489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:03.147070 containerd[1492]: time="2026-03-14T00:13:03.146609292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:03.147070 containerd[1492]: time="2026-03-14T00:13:03.146695495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:03.147070 containerd[1492]: time="2026-03-14T00:13:03.146960746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:03.165525 containerd[1492]: time="2026-03-14T00:13:03.165403046Z" level=info msg="StartContainer for \"a02ec88eb7e13facb56a08c87eae0ae4b5973de228308a4ccd87fbddb9e8ade9\" returns successfully" Mar 14 00:13:03.176477 systemd[1]: Started cri-containerd-083888e670bdd3f833f2c3ea872331a7d256703ed3cb955089a9940fe42e5366.scope - libcontainer container 083888e670bdd3f833f2c3ea872331a7d256703ed3cb955089a9940fe42e5366. Mar 14 00:13:03.217327 containerd[1492]: time="2026-03-14T00:13:03.217285849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-7xwlv,Uid:4d345c2b-b24d-4909-b521-d47643449273,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"083888e670bdd3f833f2c3ea872331a7d256703ed3cb955089a9940fe42e5366\"" Mar 14 00:13:03.221161 containerd[1492]: time="2026-03-14T00:13:03.221108722Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:13:04.903077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821527127.mount: Deactivated successfully. 
Mar 14 00:13:06.659300 kubelet[2624]: I0314 00:13:06.658780 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xvwlg" podStartSLOduration=4.658764506 podStartE2EDuration="4.658764506s" podCreationTimestamp="2026-03-14 00:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:04.013045531 +0000 UTC m=+8.218140040" watchObservedRunningTime="2026-03-14 00:13:06.658764506 +0000 UTC m=+10.863859015" Mar 14 00:13:11.378377 containerd[1492]: time="2026-03-14T00:13:11.377438576Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:11.378787 containerd[1492]: time="2026-03-14T00:13:11.378697891Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 14 00:13:11.379600 containerd[1492]: time="2026-03-14T00:13:11.379531635Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:11.381949 containerd[1492]: time="2026-03-14T00:13:11.381901862Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:11.382703 containerd[1492]: time="2026-03-14T00:13:11.382666083Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 8.161490518s" Mar 14 00:13:11.382776 containerd[1492]: time="2026-03-14T00:13:11.382704004Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 14 00:13:11.390286 containerd[1492]: time="2026-03-14T00:13:11.390211136Z" level=info msg="CreateContainer within sandbox \"083888e670bdd3f833f2c3ea872331a7d256703ed3cb955089a9940fe42e5366\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:13:11.407036 containerd[1492]: time="2026-03-14T00:13:11.406961847Z" level=info msg="CreateContainer within sandbox \"083888e670bdd3f833f2c3ea872331a7d256703ed3cb955089a9940fe42e5366\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9\"" Mar 14 00:13:11.408454 containerd[1492]: time="2026-03-14T00:13:11.408029397Z" level=info msg="StartContainer for \"bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9\"" Mar 14 00:13:11.438168 systemd[1]: run-containerd-runc-k8s.io-bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9-runc.YZB82O.mount: Deactivated successfully. Mar 14 00:13:11.446484 systemd[1]: Started cri-containerd-bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9.scope - libcontainer container bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9. 
Mar 14 00:13:11.476813 containerd[1492]: time="2026-03-14T00:13:11.476677690Z" level=info msg="StartContainer for \"bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9\" returns successfully" Mar 14 00:13:17.657196 sudo[1750]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:17.754651 sshd[1747]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:17.760213 systemd[1]: sshd@7-188.245.55.47:22-68.220.241.50:58378.service: Deactivated successfully. Mar 14 00:13:17.766960 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:13:17.769799 systemd[1]: session-7.scope: Consumed 8.219s CPU time, 151.4M memory peak, 0B memory swap peak. Mar 14 00:13:17.770575 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:13:17.771935 systemd-logind[1460]: Removed session 7. Mar 14 00:13:21.149358 kubelet[2624]: I0314 00:13:21.149298 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-7xwlv" podStartSLOduration=10.984391349 podStartE2EDuration="19.149261626s" podCreationTimestamp="2026-03-14 00:13:02 +0000 UTC" firstStartedPulling="2026-03-14 00:13:03.219201046 +0000 UTC m=+7.424295515" lastFinishedPulling="2026-03-14 00:13:11.384071283 +0000 UTC m=+15.589165792" observedRunningTime="2026-03-14 00:13:12.012113637 +0000 UTC m=+16.217208186" watchObservedRunningTime="2026-03-14 00:13:21.149261626 +0000 UTC m=+25.354356135" Mar 14 00:13:21.161226 systemd[1]: Created slice kubepods-besteffort-podea52b25a_2798_4c9a_8c6f_5b256c256df6.slice - libcontainer container kubepods-besteffort-podea52b25a_2798_4c9a_8c6f_5b256c256df6.slice. Mar 14 00:13:21.193920 kubelet[2624]: I0314 00:13:21.193850 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ea52b25a-2798-4c9a-8c6f-5b256c256df6-typha-certs\") pod \"calico-typha-df4d69565-zsh7l\" (UID: \"ea52b25a-2798-4c9a-8c6f-5b256c256df6\") " pod="calico-system/calico-typha-df4d69565-zsh7l" Mar 14 00:13:21.193920 kubelet[2624]: I0314 00:13:21.193923 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm8ds\" (UniqueName: \"kubernetes.io/projected/ea52b25a-2798-4c9a-8c6f-5b256c256df6-kube-api-access-tm8ds\") pod \"calico-typha-df4d69565-zsh7l\" (UID: \"ea52b25a-2798-4c9a-8c6f-5b256c256df6\") " pod="calico-system/calico-typha-df4d69565-zsh7l" Mar 14 00:13:21.194145 kubelet[2624]: I0314 00:13:21.193950 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea52b25a-2798-4c9a-8c6f-5b256c256df6-tigera-ca-bundle\") pod \"calico-typha-df4d69565-zsh7l\" (UID: \"ea52b25a-2798-4c9a-8c6f-5b256c256df6\") " pod="calico-system/calico-typha-df4d69565-zsh7l" Mar 14 00:13:21.278314 systemd[1]: Created slice kubepods-besteffort-poddf8eaaa4_a428_4087_997e_d1ee87563899.slice - libcontainer container kubepods-besteffort-poddf8eaaa4_a428_4087_997e_d1ee87563899.slice. 
Mar 14 00:13:21.294291 kubelet[2624]: I0314 00:13:21.294190 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-cni-log-dir\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294480 kubelet[2624]: I0314 00:13:21.294309 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/df8eaaa4-a428-4087-997e-d1ee87563899-node-certs\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294480 kubelet[2624]: I0314 00:13:21.294348 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-var-lib-calico\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294480 kubelet[2624]: I0314 00:13:21.294382 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-xtables-lock\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294480 kubelet[2624]: I0314 00:13:21.294475 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-bpffs\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294759 kubelet[2624]: I0314 00:13:21.294514 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-cni-bin-dir\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294759 kubelet[2624]: I0314 00:13:21.294544 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-sys-fs\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294759 kubelet[2624]: I0314 00:13:21.294580 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-nodeproc\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294759 kubelet[2624]: I0314 00:13:21.294631 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-cni-net-dir\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294759 kubelet[2624]: I0314 00:13:21.294661 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-flexvol-driver-host\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294965 kubelet[2624]: I0314 00:13:21.294697 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df8eaaa4-a428-4087-997e-d1ee87563899-tigera-ca-bundle\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294965 kubelet[2624]: I0314 00:13:21.294729 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-lib-modules\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294965 kubelet[2624]: I0314 00:13:21.294757 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-policysync\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294965 kubelet[2624]: I0314 00:13:21.294775 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/df8eaaa4-a428-4087-997e-d1ee87563899-var-run-calico\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.294965 kubelet[2624]: I0314 00:13:21.294797 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6lqc\" (UniqueName: \"kubernetes.io/projected/df8eaaa4-a428-4087-997e-d1ee87563899-kube-api-access-d6lqc\") pod \"calico-node-4rdfd\" (UID: \"df8eaaa4-a428-4087-997e-d1ee87563899\") " pod="calico-system/calico-node-4rdfd" Mar 14 00:13:21.365917 kubelet[2624]: E0314 00:13:21.364882 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e" Mar 14 00:13:21.396512 kubelet[2624]: I0314 00:13:21.396380 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfsw\" (UniqueName: \"kubernetes.io/projected/3c2769e1-ca6c-48f2-909e-e2592f4d7c1e-kube-api-access-xhfsw\") pod \"csi-node-driver-4k969\" (UID: \"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e\") " pod="calico-system/csi-node-driver-4k969" Mar 14 00:13:21.396698 kubelet[2624]: I0314 00:13:21.396684 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c2769e1-ca6c-48f2-909e-e2592f4d7c1e-kubelet-dir\") pod \"csi-node-driver-4k969\" (UID: \"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e\") " pod="calico-system/csi-node-driver-4k969" Mar 14 00:13:21.397038 kubelet[2624]: I0314 00:13:21.396975 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/3c2769e1-ca6c-48f2-909e-e2592f4d7c1e-socket-dir\") pod \"csi-node-driver-4k969\" (UID: \"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e\") " pod="calico-system/csi-node-driver-4k969" Mar 14 00:13:21.397038 kubelet[2624]: I0314 00:13:21.396997 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c2769e1-ca6c-48f2-909e-e2592f4d7c1e-varrun\") pod \"csi-node-driver-4k969\" (UID: \"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e\") " pod="calico-system/csi-node-driver-4k969" Mar 14 00:13:21.397556 kubelet[2624]: I0314 00:13:21.397378 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c2769e1-ca6c-48f2-909e-e2592f4d7c1e-registration-dir\") pod \"csi-node-driver-4k969\" (UID: \"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e\") " pod="calico-system/csi-node-driver-4k969" Mar 14 00:13:21.406332 kubelet[2624]: E0314 00:13:21.405604 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.406665 kubelet[2624]: W0314 00:13:21.406598 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.406665 kubelet[2624]: E0314 00:13:21.406631 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.424348 kubelet[2624]: E0314 00:13:21.424237 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.424348 kubelet[2624]: W0314 00:13:21.424261 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.424348 kubelet[2624]: E0314 00:13:21.424311 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.468988 containerd[1492]: time="2026-03-14T00:13:21.468509564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-df4d69565-zsh7l,Uid:ea52b25a-2798-4c9a-8c6f-5b256c256df6,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:21.499556 kubelet[2624]: E0314 00:13:21.498996 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.499556 kubelet[2624]: W0314 00:13:21.499334 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.499556 kubelet[2624]: E0314 00:13:21.499356 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:21.500002 kubelet[2624]: E0314 00:13:21.499822 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.500002 kubelet[2624]: W0314 00:13:21.499838 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.500002 kubelet[2624]: E0314 00:13:21.499859 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.500170 kubelet[2624]: E0314 00:13:21.500157 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.501217 kubelet[2624]: W0314 00:13:21.501180 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.501485 kubelet[2624]: E0314 00:13:21.501219 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.501754 kubelet[2624]: E0314 00:13:21.501597 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.501754 kubelet[2624]: W0314 00:13:21.501612 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.501754 kubelet[2624]: E0314 00:13:21.501640 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.502036 kubelet[2624]: E0314 00:13:21.501919 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.502036 kubelet[2624]: W0314 00:13:21.501931 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.502036 kubelet[2624]: E0314 00:13:21.501943 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.502370 kubelet[2624]: E0314 00:13:21.502195 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.502370 kubelet[2624]: W0314 00:13:21.502226 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.502370 kubelet[2624]: E0314 00:13:21.502238 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:21.502704 kubelet[2624]: E0314 00:13:21.502561 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.502704 kubelet[2624]: W0314 00:13:21.502573 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.502704 kubelet[2624]: E0314 00:13:21.502584 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.503027 kubelet[2624]: E0314 00:13:21.502875 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.503027 kubelet[2624]: W0314 00:13:21.502887 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.503027 kubelet[2624]: E0314 00:13:21.502897 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.503194 kubelet[2624]: E0314 00:13:21.503182 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.503399 kubelet[2624]: W0314 00:13:21.503236 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.503399 kubelet[2624]: E0314 00:13:21.503251 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.503815 kubelet[2624]: E0314 00:13:21.503657 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.503815 kubelet[2624]: W0314 00:13:21.503671 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.503815 kubelet[2624]: E0314 00:13:21.503684 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.504110 kubelet[2624]: E0314 00:13:21.504003 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.504110 kubelet[2624]: W0314 00:13:21.504038 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.504110 kubelet[2624]: E0314 00:13:21.504056 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:13:21.504861 kubelet[2624]: E0314 00:13:21.504218 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.504861 kubelet[2624]: W0314 00:13:21.504512 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.504861 kubelet[2624]: E0314 00:13:21.504529 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.505484 kubelet[2624]: E0314 00:13:21.505253 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.505484 kubelet[2624]: W0314 00:13:21.505271 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.505484 kubelet[2624]: E0314 00:13:21.505330 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.506136 kubelet[2624]: E0314 00:13:21.505543 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.506136 kubelet[2624]: W0314 00:13:21.505553 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.506136 kubelet[2624]: E0314 00:13:21.505565 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.506136 kubelet[2624]: E0314 00:13:21.506044 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.506136 kubelet[2624]: W0314 00:13:21.506056 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.506136 kubelet[2624]: E0314 00:13:21.506069 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:13:21.507413 kubelet[2624]: E0314 00:13:21.506377 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:13:21.507413 kubelet[2624]: W0314 00:13:21.506388 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:13:21.507413 kubelet[2624]: E0314 00:13:21.506495 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:13:21.507413 kubelet[2624]: E0314 00:13:21.507348 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:13:21.507413 kubelet[2624]: W0314 00:13:21.507362 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:13:21.507413 kubelet[2624]: E0314 00:13:21.507376 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-line FlexVolume probe failure repeats, only timestamps changing, through Mar 14 00:13:21.522, interleaved with the containerd entries below]
Mar 14 00:13:21.508395 containerd[1492]: time="2026-03-14T00:13:21.505678778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:13:21.508395 containerd[1492]: time="2026-03-14T00:13:21.505759379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:13:21.508395 containerd[1492]: time="2026-03-14T00:13:21.505790500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:21.508395 containerd[1492]: time="2026-03-14T00:13:21.505886982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:21.530557 systemd[1]: Started cri-containerd-5cdec4e06ff12324879108d567f939b415eab375541d3d722fd7bb97a2b00368.scope - libcontainer container 5cdec4e06ff12324879108d567f939b415eab375541d3d722fd7bb97a2b00368.
Mar 14 00:13:21.572054 containerd[1492]: time="2026-03-14T00:13:21.571815242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-df4d69565-zsh7l,Uid:ea52b25a-2798-4c9a-8c6f-5b256c256df6,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cdec4e06ff12324879108d567f939b415eab375541d3d722fd7bb97a2b00368\""
Mar 14 00:13:21.576878 containerd[1492]: time="2026-03-14T00:13:21.575883043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 14 00:13:21.582817 containerd[1492]: time="2026-03-14T00:13:21.582776379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4rdfd,Uid:df8eaaa4-a428-4087-997e-d1ee87563899,Namespace:calico-system,Attempt:0,}"
Mar 14 00:13:21.615329 containerd[1492]: time="2026-03-14T00:13:21.615224219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:13:21.615611 containerd[1492]: time="2026-03-14T00:13:21.615286500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:13:21.615703 containerd[1492]: time="2026-03-14T00:13:21.615644587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:21.615800 containerd[1492]: time="2026-03-14T00:13:21.615764150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:13:21.635826 systemd[1]: Started cri-containerd-100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e.scope - libcontainer container 100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e.
Mar 14 00:13:21.667303 containerd[1492]: time="2026-03-14T00:13:21.665081043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4rdfd,Uid:df8eaaa4-a428-4087-997e-d1ee87563899,Namespace:calico-system,Attempt:0,} returns sandbox id \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\""
Mar 14 00:13:22.940770 kubelet[2624]: E0314 00:13:22.940621 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e"
Mar 14 00:13:22.965331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305024339.mount: Deactivated successfully.
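The kubelet entries above come from its FlexVolume probe loop: driver-call.go execs each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and JSON-decodes whatever the binary writes to stdout. Both messages follow from the binary not existing yet: the exec fails, stdout stays empty, and unmarshalling zero bytes yields exactly "unexpected end of JSON input". A minimal Go sketch of that failure mode (the struct shape and this standalone program are illustrative assumptions, not kubelet source):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the general shape of a FlexVolume driver reply,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Path taken from the log; the binary is not installed yet.
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()
	fmt.Println("exec error:", err) // non-nil while the driver is absent; out stays empty

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal error:", err) // "unexpected end of JSON input"
	}
}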
Mar 14 00:13:23.925814 containerd[1492]: time="2026-03-14T00:13:23.925768181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:23.933306 containerd[1492]: time="2026-03-14T00:13:23.931717932Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:23.933306 containerd[1492]: time="2026-03-14T00:13:23.932241702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Mar 14 00:13:23.936995 containerd[1492]: time="2026-03-14T00:13:23.936923229Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:23.938099 containerd[1492]: time="2026-03-14T00:13:23.937604441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.361536715s"
Mar 14 00:13:23.938099 containerd[1492]: time="2026-03-14T00:13:23.937641082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Mar 14 00:13:23.939339 containerd[1492]: time="2026-03-14T00:13:23.939303313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 14 00:13:23.960643 containerd[1492]: time="2026-03-14T00:13:23.960410786Z" level=info msg="CreateContainer within sandbox \"5cdec4e06ff12324879108d567f939b415eab375541d3d722fd7bb97a2b00368\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 14 00:13:23.975589 containerd[1492]: time="2026-03-14T00:13:23.975528147Z" level=info msg="CreateContainer within sandbox \"5cdec4e06ff12324879108d567f939b415eab375541d3d722fd7bb97a2b00368\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe3fbf596f716c2ae44cb7a7a5e231c0cac76a0711859ab88bfb91c8074a0db2\""
Mar 14 00:13:23.976947 containerd[1492]: time="2026-03-14T00:13:23.976917413Z" level=info msg="StartContainer for \"fe3fbf596f716c2ae44cb7a7a5e231c0cac76a0711859ab88bfb91c8074a0db2\""
Mar 14 00:13:24.011558 systemd[1]: Started cri-containerd-fe3fbf596f716c2ae44cb7a7a5e231c0cac76a0711859ab88bfb91c8074a0db2.scope - libcontainer container fe3fbf596f716c2ae44cb7a7a5e231c0cac76a0711859ab88bfb91c8074a0db2.
Mar 14 00:13:24.058420 containerd[1492]: time="2026-03-14T00:13:24.058356658Z" level=info msg="StartContainer for \"fe3fbf596f716c2ae44cb7a7a5e231c0cac76a0711859ab88bfb91c8074a0db2\" returns successfully"
Mar 14 00:13:24.941236 kubelet[2624]: E0314 00:13:24.940681 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e"
Mar 14 00:13:25.107330 kubelet[2624]: E0314 00:13:25.107187 2624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:13:25.107330 kubelet[2624]: W0314 00:13:25.107220 2624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:13:25.107330 kubelet[2624]: E0314 00:13:25.107269 2624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-line FlexVolume probe failure repeats, only timestamps changing, through Mar 14 00:13:25.139]
Mar 14 00:13:25.208480 containerd[1492]: time="2026-03-14T00:13:25.208355645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:25.209844 containerd[1492]: time="2026-03-14T00:13:25.209558466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Mar 14 00:13:25.210919 containerd[1492]: time="2026-03-14T00:13:25.210801888Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:25.215801 containerd[1492]: time="2026-03-14T00:13:25.214904840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:25.217104 containerd[1492]: time="2026-03-14T00:13:25.217040838Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.277591803s"
Mar 14 00:13:25.217154 containerd[1492]: time="2026-03-14T00:13:25.217110119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Mar 14 00:13:25.222849 containerd[1492]: time="2026-03-14T00:13:25.222820900Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 14 00:13:25.240544 containerd[1492]: time="2026-03-14T00:13:25.240499491Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337\""
Mar 14 00:13:25.241397 containerd[1492]: time="2026-03-14T00:13:25.241350826Z" level=info msg="StartContainer for \"21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337\""
Mar 14 00:13:25.282604 systemd[1]: Started cri-containerd-21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337.scope - libcontainer container 21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337.
Mar 14 00:13:25.313535 containerd[1492]: time="2026-03-14T00:13:25.313425096Z" level=info msg="StartContainer for \"21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337\" returns successfully"
Mar 14 00:13:25.333463 systemd[1]: cri-containerd-21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337.scope: Deactivated successfully.
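The flexvol-driver init container that just ran and exited (the scope's "Deactivated successfully" above) is the step that installs a working driver on the host under the nodeagent~uds directory the kubelet has been probing, which is why the unmarshal errors eventually stop. The FlexVolume call convention a driver must satisfy is small: answer init on stdout with a JSON status. A toy stand-in in Go illustrating that convention (hypothetical; not Calico's actual uds binary):

package main

import (
	"fmt"
	"os"
)

func main() {
	// kubelet invokes the driver as: <driver> init
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Any valid JSON status satisfies driver-call.go's unmarshal step.
		fmt.Println(`{"status":"Success","capabilities":{"attach":false}}`)
		return
	}
	// FlexVolume drivers report unimplemented calls like this.
	fmt.Println(`{"status":"Not supported"}`)
	os.Exit(1)
}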
Mar 14 00:13:25.456971 containerd[1492]: time="2026-03-14T00:13:25.456844982Z" level=info msg="shim disconnected" id=21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337 namespace=k8s.io
Mar 14 00:13:25.456971 containerd[1492]: time="2026-03-14T00:13:25.456938783Z" level=warning msg="cleaning up after shim disconnected" id=21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337 namespace=k8s.io
Mar 14 00:13:25.456971 containerd[1492]: time="2026-03-14T00:13:25.456959464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:13:25.949772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21d8a8726af59f91407129a10edf1670e40fd11b1f7befe01644a0ee6cf0e337-rootfs.mount: Deactivated successfully.
Mar 14 00:13:26.041791 kubelet[2624]: I0314 00:13:26.041218 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:13:26.045201 containerd[1492]: time="2026-03-14T00:13:26.045101803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 14 00:13:26.075836 kubelet[2624]: I0314 00:13:26.075761 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-df4d69565-zsh7l" podStartSLOduration=2.711275838 podStartE2EDuration="5.075742169s" podCreationTimestamp="2026-03-14 00:13:21 +0000 UTC" firstStartedPulling="2026-03-14 00:13:21.574267851 +0000 UTC m=+25.779362360" lastFinishedPulling="2026-03-14 00:13:23.938734182 +0000 UTC m=+28.143828691" observedRunningTime="2026-03-14 00:13:25.059575944 +0000 UTC m=+29.264670453" watchObservedRunningTime="2026-03-14 00:13:26.075742169 +0000 UTC m=+30.280836758"
Mar 14 00:13:26.942550 kubelet[2624]: E0314 00:13:26.941051 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e"
Mar 14 00:13:28.940738 kubelet[2624]: E0314 00:13:28.940622 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e"
Mar 14 00:13:29.992956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916386673.mount: Deactivated successfully.
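The pod_startup_latency_tracker line carries its own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since time spent pulling images is excluded from the SLO measure. A short Go check against the logged values (timestamps copied from the entry above, "m=+..." monotonic suffixes dropped):

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-14 00:13:21 +0000 UTC")
	firstPull := mustParse("2026-03-14 00:13:21.574267851 +0000 UTC")
	lastPull := mustParse("2026-03-14 00:13:23.938734182 +0000 UTC")
	observed := mustParse("2026-03-14 00:13:26.075742169 +0000 UTC")

	e2e := observed.Sub(created)       // 5.075742169s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 2.364466331s spent pulling the image
	fmt.Println(e2e, e2e-pulling)      // prints 5.075742169s 2.711275838s
}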
Mar 14 00:13:30.020427 containerd[1492]: time="2026-03-14T00:13:30.020314064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:30.021928 containerd[1492]: time="2026-03-14T00:13:30.021869249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Mar 14 00:13:30.023345 containerd[1492]: time="2026-03-14T00:13:30.023298871Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:30.026962 containerd[1492]: time="2026-03-14T00:13:30.026732085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:30.028575 containerd[1492]: time="2026-03-14T00:13:30.028341590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 3.983202347s"
Mar 14 00:13:30.028575 containerd[1492]: time="2026-03-14T00:13:30.028374910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Mar 14 00:13:30.033844 containerd[1492]: time="2026-03-14T00:13:30.033801075Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 14 00:13:30.050091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2103432898.mount: Deactivated successfully.
Mar 14 00:13:30.050390 containerd[1492]: time="2026-03-14T00:13:30.050344054Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de\""
Mar 14 00:13:30.053329 containerd[1492]: time="2026-03-14T00:13:30.052454487Z" level=info msg="StartContainer for \"fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de\""
Mar 14 00:13:30.089529 systemd[1]: Started cri-containerd-fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de.scope - libcontainer container fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de.
Mar 14 00:13:30.121468 containerd[1492]: time="2026-03-14T00:13:30.121412085Z" level=info msg="StartContainer for \"fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de\" returns successfully"
Mar 14 00:13:30.225366 systemd[1]: cri-containerd-fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de.scope: Deactivated successfully.
Mar 14 00:13:30.389449 containerd[1492]: time="2026-03-14T00:13:30.389191071Z" level=info msg="shim disconnected" id=fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de namespace=k8s.io
Mar 14 00:13:30.389449 containerd[1492]: time="2026-03-14T00:13:30.389310713Z" level=warning msg="cleaning up after shim disconnected" id=fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de namespace=k8s.io
Mar 14 00:13:30.389449 containerd[1492]: time="2026-03-14T00:13:30.389334633Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:13:30.941388 kubelet[2624]: E0314 00:13:30.941224 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e"
Mar 14 00:13:30.996508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb91f1a518890a526b9cf7515d788fb1681d8bd68ec3fa0379f4bb710a8153de-rootfs.mount: Deactivated successfully.
Mar 14 00:13:31.065026 containerd[1492]: time="2026-03-14T00:13:31.064713690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 14 00:13:32.940746 kubelet[2624]: E0314 00:13:32.940697 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e"
Mar 14 00:13:33.076496 kubelet[2624]: I0314 00:13:33.076061 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:13:33.298761 containerd[1492]: time="2026-03-14T00:13:33.298606285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:33.300668 containerd[1492]: time="2026-03-14T00:13:33.300578154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Mar 14 00:13:33.301728 containerd[1492]: time="2026-03-14T00:13:33.301688850Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:33.304094 containerd[1492]: time="2026-03-14T00:13:33.304003884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:13:33.304907 containerd[1492]: time="2026-03-14T00:13:33.304876977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.240123767s"
Mar 14 00:13:33.305081 containerd[1492]: time="2026-03-14T00:13:33.304998699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Mar 14 00:13:33.309990 containerd[1492]: time="2026-03-14T00:13:33.309963212Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Mar 14 00:13:33.331324 containerd[1492]: time="2026-03-14T00:13:33.331074042Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f\""
Mar 14 00:13:33.336293 containerd[1492]: time="2026-03-14T00:13:33.336200358Z" level=info msg="StartContainer for \"bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f\""
Mar 14 00:13:33.372498 systemd[1]: Started cri-containerd-bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f.scope - libcontainer container bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f.
Mar 14 00:13:33.410243 containerd[1492]: time="2026-03-14T00:13:33.410075965Z" level=info msg="StartContainer for \"bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f\" returns successfully"
Mar 14 00:13:33.988873 systemd[1]: cri-containerd-bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f.scope: Deactivated successfully.
Mar 14 00:13:33.995653 kubelet[2624]: I0314 00:13:33.995602 2624 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 14 00:13:34.019725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f-rootfs.mount: Deactivated successfully.
Mar 14 00:13:34.110554 systemd[1]: Created slice kubepods-besteffort-pod088db528_527b_4c1c_aa0e_fb534a4b3d53.slice - libcontainer container kubepods-besteffort-pod088db528_527b_4c1c_aa0e_fb534a4b3d53.slice.
Mar 14 00:13:34.117827 containerd[1492]: time="2026-03-14T00:13:34.117033698Z" level=info msg="shim disconnected" id=bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f namespace=k8s.io
Mar 14 00:13:34.117827 containerd[1492]: time="2026-03-14T00:13:34.117093419Z" level=warning msg="cleaning up after shim disconnected" id=bb9a8928a15efd5d8ddeb226166684e23a61e5532a60925399738d803862f16f namespace=k8s.io
Mar 14 00:13:34.117827 containerd[1492]: time="2026-03-14T00:13:34.117106499Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:13:34.126710 systemd[1]: Created slice kubepods-burstable-pod176d1ac9_bc75_42c6_9936_a88fc33155e1.slice - libcontainer container kubepods-burstable-pod176d1ac9_bc75_42c6_9936_a88fc33155e1.slice.
Mar 14 00:13:34.142426 systemd[1]: Created slice kubepods-burstable-pod5ccf3f92_1893_45dd_8984_7c1c3523f0d0.slice - libcontainer container kubepods-burstable-pod5ccf3f92_1893_45dd_8984_7c1c3523f0d0.slice.
Mar 14 00:13:34.157328 systemd[1]: Created slice kubepods-besteffort-pod54468044_a1de_4bd2_ad46_1b29248bc3b5.slice - libcontainer container kubepods-besteffort-pod54468044_a1de_4bd2_ad46_1b29248bc3b5.slice.
Mar 14 00:13:34.168684 systemd[1]: Created slice kubepods-besteffort-podd6ead6b1_357d_411f_8456_c605fe68bb57.slice - libcontainer container kubepods-besteffort-podd6ead6b1_357d_411f_8456_c605fe68bb57.slice.
Mar 14 00:13:34.178968 systemd[1]: Created slice kubepods-besteffort-podad64364e_d94a_400e_a2c3_7d753a27a0d8.slice - libcontainer container kubepods-besteffort-podad64364e_d94a_400e_a2c3_7d753a27a0d8.slice.
Mar 14 00:13:34.188073 systemd[1]: Created slice kubepods-besteffort-podcf9c5ce0_11b8_40fd_9752_8b6c4229fbea.slice - libcontainer container kubepods-besteffort-podcf9c5ce0_11b8_40fd_9752_8b6c4229fbea.slice.
Mar 14 00:13:34.200085 kubelet[2624]: I0314 00:13:34.199938 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ws9d\" (UniqueName: \"kubernetes.io/projected/cf9c5ce0-11b8-40fd-9752-8b6c4229fbea-kube-api-access-5ws9d\") pod \"goldmane-cccfbd5cf-pr7sg\" (UID: \"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea\") " pod="calico-system/goldmane-cccfbd5cf-pr7sg"
Mar 14 00:13:34.201698 kubelet[2624]: I0314 00:13:34.201666 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlscf\" (UniqueName: \"kubernetes.io/projected/5ccf3f92-1893-45dd-8984-7c1c3523f0d0-kube-api-access-qlscf\") pod \"coredns-66bc5c9577-d24ss\" (UID: \"5ccf3f92-1893-45dd-8984-7c1c3523f0d0\") " pod="kube-system/coredns-66bc5c9577-d24ss"
Mar 14 00:13:34.201953 kubelet[2624]: I0314 00:13:34.201881 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/54468044-a1de-4bd2-ad46-1b29248bc3b5-calico-apiserver-certs\") pod \"calico-apiserver-7458dd48bf-wjkkn\" (UID: \"54468044-a1de-4bd2-ad46-1b29248bc3b5\") " pod="calico-system/calico-apiserver-7458dd48bf-wjkkn"
Mar 14 00:13:34.202078 kubelet[2624]: I0314 00:13:34.202059 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfk4\" (UniqueName: \"kubernetes.io/projected/d6ead6b1-357d-411f-8456-c605fe68bb57-kube-api-access-4nfk4\") pod \"calico-apiserver-7458dd48bf-crltd\" (UID: \"d6ead6b1-357d-411f-8456-c605fe68bb57\") " pod="calico-system/calico-apiserver-7458dd48bf-crltd"
Mar 14 00:13:34.203092 kubelet[2624]: I0314 00:13:34.203010 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad64364e-d94a-400e-a2c3-7d753a27a0d8-tigera-ca-bundle\") pod \"calico-kube-controllers-77bdccb5d5-c59xx\" (UID: \"ad64364e-d94a-400e-a2c3-7d753a27a0d8\") " pod="calico-system/calico-kube-controllers-77bdccb5d5-c59xx"
Mar 14 00:13:34.204561 kubelet[2624]: I0314 00:13:34.203619 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg9df\" (UniqueName: \"kubernetes.io/projected/088db528-527b-4c1c-aa0e-fb534a4b3d53-kube-api-access-mg9df\") pod \"whisker-d7568446c-55d6n\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " pod="calico-system/whisker-d7568446c-55d6n"
Mar 14 00:13:34.204561 kubelet[2624]: I0314 00:13:34.203656 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf9c5ce0-11b8-40fd-9752-8b6c4229fbea-config\") pod \"goldmane-cccfbd5cf-pr7sg\" (UID: \"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea\") " pod="calico-system/goldmane-cccfbd5cf-pr7sg"
Mar 14 00:13:34.204561 kubelet[2624]: I0314 00:13:34.203692 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ccf3f92-1893-45dd-8984-7c1c3523f0d0-config-volume\") pod \"coredns-66bc5c9577-d24ss\" (UID: \"5ccf3f92-1893-45dd-8984-7c1c3523f0d0\") " pod="kube-system/coredns-66bc5c9577-d24ss"
Mar 14 00:13:34.204561 kubelet[2624]: I0314 00:13:34.203710 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6ead6b1-357d-411f-8456-c605fe68bb57-calico-apiserver-certs\") pod \"calico-apiserver-7458dd48bf-crltd\" (UID: \"d6ead6b1-357d-411f-8456-c605fe68bb57\") " pod="calico-system/calico-apiserver-7458dd48bf-crltd"
Mar 14 00:13:34.204561 kubelet[2624]: I0314 00:13:34.203726 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-backend-key-pair\") pod \"whisker-d7568446c-55d6n\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " pod="calico-system/whisker-d7568446c-55d6n"
Mar 14 00:13:34.204726 kubelet[2624]: I0314 00:13:34.203762 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/176d1ac9-bc75-42c6-9936-a88fc33155e1-config-volume\") pod \"coredns-66bc5c9577-w4qz7\" (UID: \"176d1ac9-bc75-42c6-9936-a88fc33155e1\") " pod="kube-system/coredns-66bc5c9577-w4qz7"
Mar 14 00:13:34.204726 kubelet[2624]: I0314 00:13:34.203777 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9md4\" (UniqueName: \"kubernetes.io/projected/ad64364e-d94a-400e-a2c3-7d753a27a0d8-kube-api-access-h9md4\") pod \"calico-kube-controllers-77bdccb5d5-c59xx\" (UID: \"ad64364e-d94a-400e-a2c3-7d753a27a0d8\") " pod="calico-system/calico-kube-controllers-77bdccb5d5-c59xx"
Mar 14 00:13:34.204726 kubelet[2624]: I0314 00:13:34.203810 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gllwh\" (UniqueName: \"kubernetes.io/projected/176d1ac9-bc75-42c6-9936-a88fc33155e1-kube-api-access-gllwh\") pod \"coredns-66bc5c9577-w4qz7\" (UID: \"176d1ac9-bc75-42c6-9936-a88fc33155e1\") " pod="kube-system/coredns-66bc5c9577-w4qz7"
Mar 14 00:13:34.204726 kubelet[2624]: I0314 00:13:34.203828 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdlcv\" (UniqueName: \"kubernetes.io/projected/54468044-a1de-4bd2-ad46-1b29248bc3b5-kube-api-access-gdlcv\") pod \"calico-apiserver-7458dd48bf-wjkkn\" (UID: \"54468044-a1de-4bd2-ad46-1b29248bc3b5\") " pod="calico-system/calico-apiserver-7458dd48bf-wjkkn"
Mar 14 00:13:34.204726 kubelet[2624]: I0314 00:13:34.203848 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf9c5ce0-11b8-40fd-9752-8b6c4229fbea-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-pr7sg\" (UID: \"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea\") " pod="calico-system/goldmane-cccfbd5cf-pr7sg"
Mar 14 00:13:34.204831 kubelet[2624]: I0314 00:13:34.203871 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-ca-bundle\") pod \"whisker-d7568446c-55d6n\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " pod="calico-system/whisker-d7568446c-55d6n"
Mar 14 00:13:34.204831 kubelet[2624]: I0314 00:13:34.203912 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-nginx-config\") pod \"whisker-d7568446c-55d6n\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " pod="calico-system/whisker-d7568446c-55d6n"
Mar 14 00:13:34.204831 kubelet[2624]: I0314 00:13:34.203941 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cf9c5ce0-11b8-40fd-9752-8b6c4229fbea-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-pr7sg\" (UID: \"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea\") " pod="calico-system/goldmane-cccfbd5cf-pr7sg"
Mar 14 00:13:34.421404 containerd[1492]: time="2026-03-14T00:13:34.420071637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d7568446c-55d6n,Uid:088db528-527b-4c1c-aa0e-fb534a4b3d53,Namespace:calico-system,Attempt:0,}"
Mar 14 00:13:34.446633 containerd[1492]: time="2026-03-14T00:13:34.446222895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w4qz7,Uid:176d1ac9-bc75-42c6-9936-a88fc33155e1,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:34.454227 containerd[1492]: time="2026-03-14T00:13:34.454177210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d24ss,Uid:5ccf3f92-1893-45dd-8984-7c1c3523f0d0,Namespace:kube-system,Attempt:0,}"
Mar 14 00:13:34.465228 containerd[1492]: time="2026-03-14T00:13:34.465120008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-wjkkn,Uid:54468044-a1de-4bd2-ad46-1b29248bc3b5,Namespace:calico-system,Attempt:0,}"
Mar 14 00:13:34.479583 containerd[1492]: time="2026-03-14T00:13:34.479518056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-crltd,Uid:d6ead6b1-357d-411f-8456-c605fe68bb57,Namespace:calico-system,Attempt:0,}"
Mar 14 00:13:34.484012 containerd[1492]: time="2026-03-14T00:13:34.483969360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bdccb5d5-c59xx,Uid:ad64364e-d94a-400e-a2c3-7d753a27a0d8,Namespace:calico-system,Attempt:0,}"
Mar 14 00:13:34.496699 containerd[1492]: time="2026-03-14T00:13:34.496632983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pr7sg,Uid:cf9c5ce0-11b8-40fd-9752-8b6c4229fbea,Namespace:calico-system,Attempt:0,}"
Mar 14 00:13:34.566364 containerd[1492]: time="2026-03-14T00:13:34.566216909Z" level=error msg="Failed to destroy network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.573749 containerd[1492]: time="2026-03-14T00:13:34.573696777Z" level=error msg="encountered an error cleaning up failed sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.574388 containerd[1492]: time="2026-03-14T00:13:34.573935100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-d7568446c-55d6n,Uid:088db528-527b-4c1c-aa0e-fb534a4b3d53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.574474 kubelet[2624]: E0314 00:13:34.574134 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.574474 kubelet[2624]: E0314 00:13:34.574202 2624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d7568446c-55d6n"
Mar 14 00:13:34.574474 kubelet[2624]: E0314 00:13:34.574220 2624 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-d7568446c-55d6n"
Mar 14 00:13:34.574711 kubelet[2624]: E0314 00:13:34.574341 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-d7568446c-55d6n_calico-system(088db528-527b-4c1c-aa0e-fb534a4b3d53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-d7568446c-55d6n_calico-system(088db528-527b-4c1c-aa0e-fb534a4b3d53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d7568446c-55d6n" podUID="088db528-527b-4c1c-aa0e-fb534a4b3d53"
Mar 14 00:13:34.591060 containerd[1492]: time="2026-03-14T00:13:34.590977666Z" level=error msg="Failed to destroy network for sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.592706 containerd[1492]: time="2026-03-14T00:13:34.592499808Z" level=error msg="encountered an error cleaning up failed sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.594293 containerd[1492]: time="2026-03-14T00:13:34.592684131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w4qz7,Uid:176d1ac9-bc75-42c6-9936-a88fc33155e1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
[the same four kubelet error lines (log.go:32, kuberuntime_sandbox.go:71, kuberuntime_manager.go:1343, pod_workers.go:1324) follow for pod="kube-system/coredns-66bc5c9577-w4qz7" podUID="176d1ac9-bc75-42c6-9936-a88fc33155e1", citing sandbox "47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865"]
Mar 14 00:13:34.670930 containerd[1492]: time="2026-03-14T00:13:34.670881221Z" level=error msg="Failed to destroy network for sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.671517 containerd[1492]: time="2026-03-14T00:13:34.671441829Z" level=error msg="encountered an error cleaning up failed sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.673043 containerd[1492]: time="2026-03-14T00:13:34.673008132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pr7sg,Uid:cf9c5ce0-11b8-40fd-9752-8b6c4229fbea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
[the same four kubelet error lines (log.go:32, kuberuntime_sandbox.go:71, kuberuntime_manager.go:1343, pod_workers.go:1324) follow for pod="calico-system/goldmane-cccfbd5cf-pr7sg" podUID="cf9c5ce0-11b8-40fd-9752-8b6c4229fbea", citing sandbox "ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411"]
Mar 14 00:13:34.697432 containerd[1492]: time="2026-03-14T00:13:34.697385484Z" level=error msg="Failed to destroy network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.698127 containerd[1492]: time="2026-03-14T00:13:34.698091134Z" level=error msg="encountered an error cleaning up failed sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 14 00:13:34.698255 containerd[1492]: time="2026-03-14T00:13:34.698233896Z" level=error msg="RunPodSandbox for
&PodSandboxMetadata{Name:coredns-66bc5c9577-d24ss,Uid:5ccf3f92-1893-45dd-8984-7c1c3523f0d0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.698958 kubelet[2624]: E0314 00:13:34.698577 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.698958 kubelet[2624]: E0314 00:13:34.698631 2624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d24ss" Mar 14 00:13:34.698958 kubelet[2624]: E0314 00:13:34.698650 2624 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-d24ss" Mar 14 00:13:34.699101 kubelet[2624]: E0314 00:13:34.698704 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-d24ss_kube-system(5ccf3f92-1893-45dd-8984-7c1c3523f0d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-d24ss_kube-system(5ccf3f92-1893-45dd-8984-7c1c3523f0d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-d24ss" podUID="5ccf3f92-1893-45dd-8984-7c1c3523f0d0" Mar 14 00:13:34.702493 containerd[1492]: time="2026-03-14T00:13:34.702451597Z" level=error msg="Failed to destroy network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.703110 containerd[1492]: time="2026-03-14T00:13:34.702807162Z" level=error msg="encountered an error cleaning up failed sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.703110 containerd[1492]: 
time="2026-03-14T00:13:34.702874123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-crltd,Uid:d6ead6b1-357d-411f-8456-c605fe68bb57,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.703328 kubelet[2624]: E0314 00:13:34.703205 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.703328 kubelet[2624]: E0314 00:13:34.703254 2624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7458dd48bf-crltd" Mar 14 00:13:34.703328 kubelet[2624]: E0314 00:13:34.703297 2624 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7458dd48bf-crltd" Mar 14 00:13:34.703958 kubelet[2624]: E0314 00:13:34.703355 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7458dd48bf-crltd_calico-system(d6ead6b1-357d-411f-8456-c605fe68bb57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7458dd48bf-crltd_calico-system(d6ead6b1-357d-411f-8456-c605fe68bb57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7458dd48bf-crltd" podUID="d6ead6b1-357d-411f-8456-c605fe68bb57" Mar 14 00:13:34.712766 containerd[1492]: time="2026-03-14T00:13:34.712712865Z" level=error msg="Failed to destroy network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.713100 containerd[1492]: time="2026-03-14T00:13:34.713072831Z" level=error msg="encountered an error cleaning up failed sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.713167 containerd[1492]: time="2026-03-14T00:13:34.713141472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bdccb5d5-c59xx,Uid:ad64364e-d94a-400e-a2c3-7d753a27a0d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.714064 kubelet[2624]: E0314 00:13:34.713640 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.714064 kubelet[2624]: E0314 00:13:34.713714 2624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77bdccb5d5-c59xx" Mar 14 00:13:34.714064 kubelet[2624]: E0314 00:13:34.713734 2624 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77bdccb5d5-c59xx" Mar 14 00:13:34.714213 kubelet[2624]: E0314 00:13:34.713798 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77bdccb5d5-c59xx_calico-system(ad64364e-d94a-400e-a2c3-7d753a27a0d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77bdccb5d5-c59xx_calico-system(ad64364e-d94a-400e-a2c3-7d753a27a0d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77bdccb5d5-c59xx" podUID="ad64364e-d94a-400e-a2c3-7d753a27a0d8" Mar 14 00:13:34.723939 containerd[1492]: time="2026-03-14T00:13:34.723843986Z" level=error msg="Failed to destroy network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.724347 containerd[1492]: time="2026-03-14T00:13:34.724301753Z" level=error msg="encountered an error cleaning up failed sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.724401 containerd[1492]: time="2026-03-14T00:13:34.724376314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-wjkkn,Uid:54468044-a1de-4bd2-ad46-1b29248bc3b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.725109 kubelet[2624]: E0314 00:13:34.724691 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:34.725109 kubelet[2624]: E0314 00:13:34.724751 2624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7458dd48bf-wjkkn" Mar 14 00:13:34.725109 kubelet[2624]: E0314 00:13:34.724778 2624 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7458dd48bf-wjkkn" Mar 14 00:13:34.726638 kubelet[2624]: E0314 00:13:34.724862 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7458dd48bf-wjkkn_calico-system(54468044-a1de-4bd2-ad46-1b29248bc3b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7458dd48bf-wjkkn_calico-system(54468044-a1de-4bd2-ad46-1b29248bc3b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7458dd48bf-wjkkn" podUID="54468044-a1de-4bd2-ad46-1b29248bc3b5" Mar 14 00:13:34.948330 systemd[1]: Created slice kubepods-besteffort-pod3c2769e1_ca6c_48f2_909e_e2592f4d7c1e.slice - libcontainer container kubepods-besteffort-pod3c2769e1_ca6c_48f2_909e_e2592f4d7c1e.slice. 
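Every sandbox failure above reduces to the same root cause, stated in the error text itself: before wiring or unwiring a pod, the Calico CNI plugin reads the node's identity from /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted from the host. Until calico-node starts (its container is created a few entries below), every CNI ADD and DEL fails on that stat, and kubelet's pod workers requeue the pods, which is why the identical message repeats for each of them. A minimal Go sketch of that lookup, illustrative only and not the plugin's actual source:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Path from the error message above; calico/node writes this file after it
    // starts and has /var/lib/calico/ mounted from the host.
    const nodenameFile = "/var/lib/calico/nodename"

    func detectNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // The failure mode repeated throughout the log: the file is absent
            // until calico/node runs, so every CNI ADD/DEL fails here.
            return "", fmt.Errorf("reading %s: %w (is calico/node running and /var/lib/calico/ mounted?)", nodenameFile, err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := detectNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("nodename:", name)
    }

The csi-node-driver sandbox attempt that follows fails the same way.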
Mar 14 00:13:34.953117 containerd[1492]: time="2026-03-14T00:13:34.953065618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k969,Uid:3c2769e1-ca6c-48f2-909e-e2592f4d7c1e,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:35.017158 containerd[1492]: time="2026-03-14T00:13:35.016997218Z" level=error msg="Failed to destroy network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.017868 containerd[1492]: time="2026-03-14T00:13:35.017721948Z" level=error msg="encountered an error cleaning up failed sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.017868 containerd[1492]: time="2026-03-14T00:13:35.017806150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k969,Uid:3c2769e1-ca6c-48f2-909e-e2592f4d7c1e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.018284 kubelet[2624]: E0314 00:13:35.018208 2624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.018771 kubelet[2624]: E0314 00:13:35.018315 2624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4k969" Mar 14 00:13:35.018771 kubelet[2624]: E0314 00:13:35.018339 2624 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4k969" Mar 14 00:13:35.018771 kubelet[2624]: E0314 00:13:35.018398 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4k969_calico-system(3c2769e1-ca6c-48f2-909e-e2592f4d7c1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4k969_calico-system(3c2769e1-ca6c-48f2-909e-e2592f4d7c1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e" Mar 14 00:13:35.080578 kubelet[2624]: I0314 00:13:35.080419 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:35.081218 containerd[1492]: time="2026-03-14T00:13:35.081173169Z" level=info msg="StopPodSandbox for \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\"" Mar 14 00:13:35.081512 containerd[1492]: time="2026-03-14T00:13:35.081391452Z" level=info msg="Ensure that sandbox 2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949 in task-service has been cleanup successfully" Mar 14 00:13:35.083655 kubelet[2624]: I0314 00:13:35.083626 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:35.084895 containerd[1492]: time="2026-03-14T00:13:35.084516377Z" level=info msg="StopPodSandbox for \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\"" Mar 14 00:13:35.084895 containerd[1492]: time="2026-03-14T00:13:35.084707899Z" level=info msg="Ensure that sandbox ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411 in task-service has been cleanup successfully" Mar 14 00:13:35.086016 kubelet[2624]: I0314 00:13:35.085985 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:35.087957 containerd[1492]: time="2026-03-14T00:13:35.087793823Z" level=info msg="StopPodSandbox for \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\"" Mar 14 00:13:35.088654 containerd[1492]: time="2026-03-14T00:13:35.088582634Z" level=info msg="Ensure that sandbox 66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d in task-service has been cleanup successfully" Mar 14 00:13:35.091202 kubelet[2624]: I0314 00:13:35.091029 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:35.093138 containerd[1492]: time="2026-03-14T00:13:35.093034178Z" level=info msg="StopPodSandbox for \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\"" Mar 14 00:13:35.093356 containerd[1492]: time="2026-03-14T00:13:35.093309302Z" level=info msg="Ensure that sandbox 431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c in task-service has been cleanup successfully" Mar 14 00:13:35.096673 kubelet[2624]: I0314 00:13:35.096075 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:35.106940 containerd[1492]: time="2026-03-14T00:13:35.106889254Z" level=info msg="StopPodSandbox for \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\"" Mar 14 00:13:35.108826 containerd[1492]: time="2026-03-14T00:13:35.108051351Z" level=info msg="Ensure that sandbox 8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43 in task-service has been cleanup successfully" Mar 14 00:13:35.129006 kubelet[2624]: I0314 00:13:35.128569 2624 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:35.132509 containerd[1492]: time="2026-03-14T00:13:35.132454857Z" level=info msg="StopPodSandbox for \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\"" Mar 14 00:13:35.132889 containerd[1492]: time="2026-03-14T00:13:35.132864863Z" level=info msg="Ensure that sandbox 47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865 in task-service has been cleanup successfully" Mar 14 00:13:35.147924 kubelet[2624]: I0314 00:13:35.147882 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:35.150488 containerd[1492]: time="2026-03-14T00:13:35.150450393Z" level=info msg="StopPodSandbox for \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\"" Mar 14 00:13:35.151155 containerd[1492]: time="2026-03-14T00:13:35.150923440Z" level=info msg="Ensure that sandbox f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7 in task-service has been cleanup successfully" Mar 14 00:13:35.162057 containerd[1492]: time="2026-03-14T00:13:35.160646458Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:13:35.168103 kubelet[2624]: I0314 00:13:35.168074 2624 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:35.174080 containerd[1492]: time="2026-03-14T00:13:35.174036728Z" level=info msg="StopPodSandbox for \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\"" Mar 14 00:13:35.175580 containerd[1492]: time="2026-03-14T00:13:35.175479548Z" level=info msg="Ensure that sandbox 5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb in task-service has been cleanup successfully" Mar 14 00:13:35.212597 containerd[1492]: time="2026-03-14T00:13:35.211238216Z" level=error msg="StopPodSandbox for \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\" failed" error="failed to destroy network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.213997 kubelet[2624]: E0314 00:13:35.213423 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:35.218291 kubelet[2624]: E0314 00:13:35.216993 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949"} Mar 14 00:13:35.219849 kubelet[2624]: E0314 00:13:35.219773 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.219849 kubelet[2624]: E0314 00:13:35.219814 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4k969" podUID="3c2769e1-ca6c-48f2-909e-e2592f4d7c1e" Mar 14 00:13:35.227720 containerd[1492]: time="2026-03-14T00:13:35.227593848Z" level=info msg="CreateContainer within sandbox \"100bd9538d292235a94ab1999792231101fe5797bfe6ccee079eb8b2fed1783e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a0e6d40d3dbaf34d9c91968ce5cb76ee72d3e13a5ce8d4b7b8709c7788cfc1c\"" Mar 14 00:13:35.230420 containerd[1492]: time="2026-03-14T00:13:35.229949962Z" level=info msg="StartContainer for \"4a0e6d40d3dbaf34d9c91968ce5cb76ee72d3e13a5ce8d4b7b8709c7788cfc1c\"" Mar 14 00:13:35.256668 containerd[1492]: time="2026-03-14T00:13:35.256610900Z" level=error msg="StopPodSandbox for \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\" failed" error="failed to destroy network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.256950 kubelet[2624]: E0314 00:13:35.256916 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:35.257248 kubelet[2624]: E0314 00:13:35.257145 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43"} Mar 14 00:13:35.257248 kubelet[2624]: E0314 00:13:35.257193 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ccf3f92-1893-45dd-8984-7c1c3523f0d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.257248 kubelet[2624]: E0314 00:13:35.257219 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ccf3f92-1893-45dd-8984-7c1c3523f0d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-d24ss" podUID="5ccf3f92-1893-45dd-8984-7c1c3523f0d0" Mar 14 00:13:35.265624 containerd[1492]: time="2026-03-14T00:13:35.265566267Z" level=error msg="StopPodSandbox for \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\" failed" error="failed to destroy network for sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.266057 kubelet[2624]: E0314 00:13:35.266015 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:35.266221 kubelet[2624]: E0314 00:13:35.266202 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411"} Mar 14 00:13:35.266368 kubelet[2624]: E0314 00:13:35.266341 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.266591 kubelet[2624]: E0314 00:13:35.266505 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-pr7sg" podUID="cf9c5ce0-11b8-40fd-9752-8b6c4229fbea" Mar 14 00:13:35.274736 containerd[1492]: time="2026-03-14T00:13:35.274682957Z" level=error msg="StopPodSandbox for \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\" failed" error="failed to destroy network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.275817 kubelet[2624]: E0314 00:13:35.275679 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:35.275817 kubelet[2624]: E0314 00:13:35.275726 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb"} Mar 14 00:13:35.275817 kubelet[2624]: E0314 00:13:35.275754 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"54468044-a1de-4bd2-ad46-1b29248bc3b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.275817 kubelet[2624]: E0314 00:13:35.275779 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"54468044-a1de-4bd2-ad46-1b29248bc3b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7458dd48bf-wjkkn" podUID="54468044-a1de-4bd2-ad46-1b29248bc3b5" Mar 14 00:13:35.276470 containerd[1492]: time="2026-03-14T00:13:35.276416741Z" level=error msg="StopPodSandbox for \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\" failed" error="failed to destroy network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.276594 containerd[1492]: time="2026-03-14T00:13:35.276417701Z" level=error msg="StopPodSandbox for \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\" failed" error="failed to destroy network for sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.276884 kubelet[2624]: E0314 00:13:35.276759 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:35.276884 kubelet[2624]: E0314 00:13:35.276798 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d"} Mar 14 00:13:35.276884 kubelet[2624]: E0314 
00:13:35.276824 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6ead6b1-357d-411f-8456-c605fe68bb57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.276884 kubelet[2624]: E0314 00:13:35.276845 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6ead6b1-357d-411f-8456-c605fe68bb57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7458dd48bf-crltd" podUID="d6ead6b1-357d-411f-8456-c605fe68bb57" Mar 14 00:13:35.277165 kubelet[2624]: E0314 00:13:35.277144 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:35.277270 kubelet[2624]: E0314 00:13:35.277253 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865"} Mar 14 00:13:35.277396 kubelet[2624]: E0314 00:13:35.277347 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"176d1ac9-bc75-42c6-9936-a88fc33155e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.277396 kubelet[2624]: E0314 00:13:35.277374 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"176d1ac9-bc75-42c6-9936-a88fc33155e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-w4qz7" podUID="176d1ac9-bc75-42c6-9936-a88fc33155e1" Mar 14 00:13:35.277901 containerd[1492]: time="2026-03-14T00:13:35.277867642Z" level=error msg="StopPodSandbox for \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\" failed" error="failed to destroy network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.278304 kubelet[2624]: E0314 00:13:35.278143 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:35.278304 kubelet[2624]: E0314 00:13:35.278171 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c"} Mar 14 00:13:35.278304 kubelet[2624]: E0314 00:13:35.278196 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad64364e-d94a-400e-a2c3-7d753a27a0d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.278304 kubelet[2624]: E0314 00:13:35.278216 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad64364e-d94a-400e-a2c3-7d753a27a0d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77bdccb5d5-c59xx" podUID="ad64364e-d94a-400e-a2c3-7d753a27a0d8" Mar 14 00:13:35.288847 containerd[1492]: time="2026-03-14T00:13:35.288802437Z" level=error msg="StopPodSandbox for \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\" failed" error="failed to destroy network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:13:35.289498 kubelet[2624]: E0314 00:13:35.289016 2624 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:35.289498 kubelet[2624]: E0314 00:13:35.289056 2624 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7"} Mar 14 00:13:35.289498 kubelet[2624]: E0314 00:13:35.289083 2624 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"088db528-527b-4c1c-aa0e-fb534a4b3d53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:13:35.289498 kubelet[2624]: E0314 00:13:35.289105 2624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"088db528-527b-4c1c-aa0e-fb534a4b3d53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-d7568446c-55d6n" podUID="088db528-527b-4c1c-aa0e-fb534a4b3d53" Mar 14 00:13:35.301592 systemd[1]: Started cri-containerd-4a0e6d40d3dbaf34d9c91968ce5cb76ee72d3e13a5ce8d4b7b8709c7788cfc1c.scope - libcontainer container 4a0e6d40d3dbaf34d9c91968ce5cb76ee72d3e13a5ce8d4b7b8709c7788cfc1c. Mar 14 00:13:35.345839 containerd[1492]: time="2026-03-14T00:13:35.345580443Z" level=info msg="StartContainer for \"4a0e6d40d3dbaf34d9c91968ce5cb76ee72d3e13a5ce8d4b7b8709c7788cfc1c\" returns successfully" Mar 14 00:13:36.176925 containerd[1492]: time="2026-03-14T00:13:36.176578520Z" level=info msg="StopPodSandbox for \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\"" Mar 14 00:13:36.237770 kubelet[2624]: I0314 00:13:36.236854 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4rdfd" podStartSLOduration=3.599686245 podStartE2EDuration="15.236835242s" podCreationTimestamp="2026-03-14 00:13:21 +0000 UTC" firstStartedPulling="2026-03-14 00:13:21.668769235 +0000 UTC m=+25.873863744" lastFinishedPulling="2026-03-14 00:13:33.305918232 +0000 UTC m=+37.511012741" observedRunningTime="2026-03-14 00:13:36.235753027 +0000 UTC m=+40.440847536" watchObservedRunningTime="2026-03-14 00:13:36.236835242 +0000 UTC m=+40.441929711" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.277 [INFO][3841] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.278 [INFO][3841] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" iface="eth0" netns="/var/run/netns/cni-75c63d45-90be-2d28-b9af-d404aa7386dc" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.278 [INFO][3841] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" iface="eth0" netns="/var/run/netns/cni-75c63d45-90be-2d28-b9af-d404aa7386dc" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.278 [INFO][3841] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" iface="eth0" netns="/var/run/netns/cni-75c63d45-90be-2d28-b9af-d404aa7386dc" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.278 [INFO][3841] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.278 [INFO][3841] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.331 [INFO][3869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.331 [INFO][3869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.331 [INFO][3869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.344 [WARNING][3869] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.344 [INFO][3869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.347 [INFO][3869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:36.354325 containerd[1492]: 2026-03-14 00:13:36.349 [INFO][3841] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:36.354696 systemd[1]: run-netns-cni\x2d75c63d45\x2d90be\x2d2d28\x2db9af\x2dd404aa7386dc.mount: Deactivated successfully. 
Mar 14 00:13:36.356854 containerd[1492]: time="2026-03-14T00:13:36.356269629Z" level=info msg="TearDown network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\" successfully" Mar 14 00:13:36.356854 containerd[1492]: time="2026-03-14T00:13:36.356391111Z" level=info msg="StopPodSandbox for \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\" returns successfully" Mar 14 00:13:36.431701 kubelet[2624]: I0314 00:13:36.431512 2624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-ca-bundle\") pod \"088db528-527b-4c1c-aa0e-fb534a4b3d53\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " Mar 14 00:13:36.431701 kubelet[2624]: I0314 00:13:36.431636 2624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-nginx-config\") pod \"088db528-527b-4c1c-aa0e-fb534a4b3d53\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " Mar 14 00:13:36.431701 kubelet[2624]: I0314 00:13:36.431676 2624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg9df\" (UniqueName: \"kubernetes.io/projected/088db528-527b-4c1c-aa0e-fb534a4b3d53-kube-api-access-mg9df\") pod \"088db528-527b-4c1c-aa0e-fb534a4b3d53\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " Mar 14 00:13:36.431701 kubelet[2624]: I0314 00:13:36.431704 2624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-backend-key-pair\") pod \"088db528-527b-4c1c-aa0e-fb534a4b3d53\" (UID: \"088db528-527b-4c1c-aa0e-fb534a4b3d53\") " Mar 14 00:13:36.433262 kubelet[2624]: I0314 00:13:36.432837 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "088db528-527b-4c1c-aa0e-fb534a4b3d53" (UID: "088db528-527b-4c1c-aa0e-fb534a4b3d53"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:13:36.433262 kubelet[2624]: I0314 00:13:36.433214 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "088db528-527b-4c1c-aa0e-fb534a4b3d53" (UID: "088db528-527b-4c1c-aa0e-fb534a4b3d53"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:13:36.439954 kubelet[2624]: I0314 00:13:36.438447 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "088db528-527b-4c1c-aa0e-fb534a4b3d53" (UID: "088db528-527b-4c1c-aa0e-fb534a4b3d53"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:13:36.439096 systemd[1]: var-lib-kubelet-pods-088db528\x2d527b\x2d4c1c\x2daa0e\x2dfb534a4b3d53-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
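The \x2d and \x7e runs in the mount-unit names above are not corruption; they are systemd's path escaping. When kubelet volume paths become .mount units, "/" turns into "-", and bytes outside systemd's allowed set (including "-" itself and "~") are hex-escaped. An approximate Go sketch, not the canonical systemd-escape implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // escapePath approximates systemd's path escaping: trim slashes at the
    // ends, turn "/" into "-", keep [A-Za-z0-9:_.], hex-escape the rest
    // (including "-" itself and "~").
    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
                c == ':', c == '_', c == '.' && i != 0:
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/088db528-527b-4c1c-aa0e-fb534a4b3d53/volumes/kubernetes.io~secret/whisker-backend-key-pair") + ".mount")
    }

Its output matches the whisker-backend-key-pair mount unit deactivated above.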
Mar 14 00:13:36.442455 kubelet[2624]: I0314 00:13:36.442418 2624 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088db528-527b-4c1c-aa0e-fb534a4b3d53-kube-api-access-mg9df" (OuterVolumeSpecName: "kube-api-access-mg9df") pod "088db528-527b-4c1c-aa0e-fb534a4b3d53" (UID: "088db528-527b-4c1c-aa0e-fb534a4b3d53"). InnerVolumeSpecName "kube-api-access-mg9df". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:13:36.442760 systemd[1]: var-lib-kubelet-pods-088db528\x2d527b\x2d4c1c\x2daa0e\x2dfb534a4b3d53-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmg9df.mount: Deactivated successfully. Mar 14 00:13:36.532433 kubelet[2624]: I0314 00:13:36.532116 2624 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-ca-bundle\") on node \"ci-4081-3-6-n-8cab04691e\" DevicePath \"\"" Mar 14 00:13:36.532433 kubelet[2624]: I0314 00:13:36.532165 2624 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/088db528-527b-4c1c-aa0e-fb534a4b3d53-nginx-config\") on node \"ci-4081-3-6-n-8cab04691e\" DevicePath \"\"" Mar 14 00:13:36.532433 kubelet[2624]: I0314 00:13:36.532183 2624 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mg9df\" (UniqueName: \"kubernetes.io/projected/088db528-527b-4c1c-aa0e-fb534a4b3d53-kube-api-access-mg9df\") on node \"ci-4081-3-6-n-8cab04691e\" DevicePath \"\"" Mar 14 00:13:36.532433 kubelet[2624]: I0314 00:13:36.532200 2624 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/088db528-527b-4c1c-aa0e-fb534a4b3d53-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-8cab04691e\" DevicePath \"\"" Mar 14 00:13:37.150421 kernel: calico-node[3966]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:13:37.198304 systemd[1]: Removed slice kubepods-besteffort-pod088db528_527b_4c1c_aa0e_fb534a4b3d53.slice - libcontainer container kubepods-besteffort-pod088db528_527b_4c1c_aa0e_fb534a4b3d53.slice. 
Mar 14 00:13:37.338687 kubelet[2624]: I0314 00:13:37.338634 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9283e7e0-a364-4844-b0d2-7514d2052e4d-whisker-backend-key-pair\") pod \"whisker-5b5dcb4448-md5xt\" (UID: \"9283e7e0-a364-4844-b0d2-7514d2052e4d\") " pod="calico-system/whisker-5b5dcb4448-md5xt" Mar 14 00:13:37.338687 kubelet[2624]: I0314 00:13:37.338687 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9283e7e0-a364-4844-b0d2-7514d2052e4d-nginx-config\") pod \"whisker-5b5dcb4448-md5xt\" (UID: \"9283e7e0-a364-4844-b0d2-7514d2052e4d\") " pod="calico-system/whisker-5b5dcb4448-md5xt" Mar 14 00:13:37.361383 kubelet[2624]: I0314 00:13:37.338711 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9283e7e0-a364-4844-b0d2-7514d2052e4d-whisker-ca-bundle\") pod \"whisker-5b5dcb4448-md5xt\" (UID: \"9283e7e0-a364-4844-b0d2-7514d2052e4d\") " pod="calico-system/whisker-5b5dcb4448-md5xt" Mar 14 00:13:37.361383 kubelet[2624]: I0314 00:13:37.338726 2624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnj9c\" (UniqueName: \"kubernetes.io/projected/9283e7e0-a364-4844-b0d2-7514d2052e4d-kube-api-access-hnj9c\") pod \"whisker-5b5dcb4448-md5xt\" (UID: \"9283e7e0-a364-4844-b0d2-7514d2052e4d\") " pod="calico-system/whisker-5b5dcb4448-md5xt" Mar 14 00:13:37.361579 systemd[1]: Created slice kubepods-besteffort-pod9283e7e0_a364_4844_b0d2_7514d2052e4d.slice - libcontainer container kubepods-besteffort-pod9283e7e0_a364_4844_b0d2_7514d2052e4d.slice. 
Mar 14 00:13:37.668642 containerd[1492]: time="2026-03-14T00:13:37.668537565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b5dcb4448-md5xt,Uid:9283e7e0-a364-4844-b0d2-7514d2052e4d,Namespace:calico-system,Attempt:0,}" Mar 14 00:13:37.834439 systemd-networkd[1384]: vxlan.calico: Link UP Mar 14 00:13:37.834857 systemd-networkd[1384]: vxlan.calico: Gained carrier Mar 14 00:13:37.891699 systemd-networkd[1384]: cali942e2cc0a10: Link UP Mar 14 00:13:37.894005 systemd-networkd[1384]: cali942e2cc0a10: Gained carrier Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.735 [INFO][4029] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0 whisker-5b5dcb4448- calico-system 9283e7e0-a364-4844-b0d2-7514d2052e4d 926 0 2026-03-14 00:13:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b5dcb4448 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e whisker-5b5dcb4448-md5xt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali942e2cc0a10 [] [] }} ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.736 [INFO][4029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.789 [INFO][4054] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" HandleID="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.805 [INFO][4054] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" HandleID="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"whisker-5b5dcb4448-md5xt", "timestamp":"2026-03-14 00:13:37.789881432 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030c6e0)} Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.805 [INFO][4054] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.805 [INFO][4054] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.805 [INFO][4054] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.810 [INFO][4054] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.821 [INFO][4054] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.829 [INFO][4054] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.834 [INFO][4054] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.843 [INFO][4054] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.844 [INFO][4054] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.849 [INFO][4054] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02 Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.860 [INFO][4054] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.868 [INFO][4054] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.129/26] block=192.168.104.128/26 handle="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.868 [INFO][4054] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.129/26] handle="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.869 [INFO][4054] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:37.927752 containerd[1492]: 2026-03-14 00:13:37.869 [INFO][4054] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.129/26] IPv6=[] ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" HandleID="k8s-pod-network.a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" Mar 14 00:13:37.928549 containerd[1492]: 2026-03-14 00:13:37.871 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0", GenerateName:"whisker-5b5dcb4448-", Namespace:"calico-system", SelfLink:"", UID:"9283e7e0-a364-4844-b0d2-7514d2052e4d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b5dcb4448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"whisker-5b5dcb4448-md5xt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali942e2cc0a10", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:37.928549 containerd[1492]: 2026-03-14 00:13:37.872 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.129/32] ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" Mar 14 00:13:37.928549 containerd[1492]: 2026-03-14 00:13:37.872 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali942e2cc0a10 ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" Mar 14 00:13:37.928549 containerd[1492]: 2026-03-14 00:13:37.895 [INFO][4029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" Mar 14 00:13:37.928549 containerd[1492]: 2026-03-14 00:13:37.896 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" 
Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0", GenerateName:"whisker-5b5dcb4448-", Namespace:"calico-system", SelfLink:"", UID:"9283e7e0-a364-4844-b0d2-7514d2052e4d", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b5dcb4448", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02", Pod:"whisker-5b5dcb4448-md5xt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali942e2cc0a10", MAC:"a2:e7:f3:05:27:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:37.928549 containerd[1492]: 2026-03-14 00:13:37.919 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02" Namespace="calico-system" Pod="whisker-5b5dcb4448-md5xt" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--5b5dcb4448--md5xt-eth0" Mar 14 00:13:37.963325 kubelet[2624]: I0314 00:13:37.962378 2624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088db528-527b-4c1c-aa0e-fb534a4b3d53" path="/var/lib/kubelet/pods/088db528-527b-4c1c-aa0e-fb534a4b3d53/volumes" Mar 14 00:13:37.967337 containerd[1492]: time="2026-03-14T00:13:37.966554420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:37.967337 containerd[1492]: time="2026-03-14T00:13:37.966613341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:37.967337 containerd[1492]: time="2026-03-14T00:13:37.966624621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:37.967337 containerd[1492]: time="2026-03-14T00:13:37.966716462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:37.992533 systemd[1]: Started cri-containerd-a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02.scope - libcontainer container a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02. 
Mar 14 00:13:38.035915 containerd[1492]: time="2026-03-14T00:13:38.035806444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b5dcb4448-md5xt,Uid:9283e7e0-a364-4844-b0d2-7514d2052e4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02\"" Mar 14 00:13:38.039301 containerd[1492]: time="2026-03-14T00:13:38.039064128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:13:38.953621 systemd-networkd[1384]: cali942e2cc0a10: Gained IPv6LL Mar 14 00:13:39.398700 containerd[1492]: time="2026-03-14T00:13:39.397686080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:39.400322 containerd[1492]: time="2026-03-14T00:13:39.400228954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 14 00:13:39.401737 containerd[1492]: time="2026-03-14T00:13:39.401540612Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:39.404521 containerd[1492]: time="2026-03-14T00:13:39.404432490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:39.406522 containerd[1492]: time="2026-03-14T00:13:39.405301422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.366176732s" Mar 14 00:13:39.406522 containerd[1492]: time="2026-03-14T00:13:39.405342542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 14 00:13:39.411516 containerd[1492]: time="2026-03-14T00:13:39.411479824Z" level=info msg="CreateContainer within sandbox \"a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:13:39.433712 containerd[1492]: time="2026-03-14T00:13:39.433521158Z" level=info msg="CreateContainer within sandbox \"a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4e88a3a0a4f0d3a76aa9c1bf580a9a597c2713966de7b56b280e4a52f70841f9\"" Mar 14 00:13:39.434615 containerd[1492]: time="2026-03-14T00:13:39.434508931Z" level=info msg="StartContainer for \"4e88a3a0a4f0d3a76aa9c1bf580a9a597c2713966de7b56b280e4a52f70841f9\"" Mar 14 00:13:39.471487 systemd[1]: run-containerd-runc-k8s.io-4e88a3a0a4f0d3a76aa9c1bf580a9a597c2713966de7b56b280e4a52f70841f9-runc.HNISIv.mount: Deactivated successfully. Mar 14 00:13:39.478460 systemd[1]: Started cri-containerd-4e88a3a0a4f0d3a76aa9c1bf580a9a597c2713966de7b56b280e4a52f70841f9.scope - libcontainer container 4e88a3a0a4f0d3a76aa9c1bf580a9a597c2713966de7b56b280e4a52f70841f9. 
Mar 14 00:13:39.516252 containerd[1492]: time="2026-03-14T00:13:39.516207981Z" level=info msg="StartContainer for \"4e88a3a0a4f0d3a76aa9c1bf580a9a597c2713966de7b56b280e4a52f70841f9\" returns successfully" Mar 14 00:13:39.518398 containerd[1492]: time="2026-03-14T00:13:39.518245728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:13:39.529476 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Mar 14 00:13:41.018048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3685689632.mount: Deactivated successfully. Mar 14 00:13:41.035324 containerd[1492]: time="2026-03-14T00:13:41.034985969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:41.036558 containerd[1492]: time="2026-03-14T00:13:41.036502149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 14 00:13:41.037669 containerd[1492]: time="2026-03-14T00:13:41.037533882Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:41.040299 containerd[1492]: time="2026-03-14T00:13:41.040089236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:41.041157 containerd[1492]: time="2026-03-14T00:13:41.041049928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.522742799s" Mar 14 00:13:41.041157 containerd[1492]: time="2026-03-14T00:13:41.041083328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 14 00:13:41.054306 containerd[1492]: time="2026-03-14T00:13:41.054225539Z" level=info msg="CreateContainer within sandbox \"a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:13:41.076037 containerd[1492]: time="2026-03-14T00:13:41.075675778Z" level=info msg="CreateContainer within sandbox \"a24f997451639b71b2f08d81b571698c29b0a7c0294e861cb047505eaea20f02\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c64a2dba28e8087c7b0d4e301e802bd952dd55f92ffaa4a5a16e7e46d44d12e1\"" Mar 14 00:13:41.076975 containerd[1492]: time="2026-03-14T00:13:41.076917714Z" level=info msg="StartContainer for \"c64a2dba28e8087c7b0d4e301e802bd952dd55f92ffaa4a5a16e7e46d44d12e1\"" Mar 14 00:13:41.124498 systemd[1]: Started cri-containerd-c64a2dba28e8087c7b0d4e301e802bd952dd55f92ffaa4a5a16e7e46d44d12e1.scope - libcontainer container c64a2dba28e8087c7b0d4e301e802bd952dd55f92ffaa4a5a16e7e46d44d12e1. 
Mar 14 00:13:41.164817 containerd[1492]: time="2026-03-14T00:13:41.164639093Z" level=info msg="StartContainer for \"c64a2dba28e8087c7b0d4e301e802bd952dd55f92ffaa4a5a16e7e46d44d12e1\" returns successfully" Mar 14 00:13:41.211358 kubelet[2624]: I0314 00:13:41.211025 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b5dcb4448-md5xt" podStartSLOduration=1.207690558 podStartE2EDuration="4.211007215s" podCreationTimestamp="2026-03-14 00:13:37 +0000 UTC" firstStartedPulling="2026-03-14 00:13:38.038807645 +0000 UTC m=+42.243902154" lastFinishedPulling="2026-03-14 00:13:41.042124302 +0000 UTC m=+45.247218811" observedRunningTime="2026-03-14 00:13:41.210354047 +0000 UTC m=+45.415448556" watchObservedRunningTime="2026-03-14 00:13:41.211007215 +0000 UTC m=+45.416101724" Mar 14 00:13:41.796013 systemd[1]: run-containerd-runc-k8s.io-c64a2dba28e8087c7b0d4e301e802bd952dd55f92ffaa4a5a16e7e46d44d12e1-runc.RAXRUB.mount: Deactivated successfully. Mar 14 00:13:47.944973 containerd[1492]: time="2026-03-14T00:13:47.942836271Z" level=info msg="StopPodSandbox for \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\"" Mar 14 00:13:47.944973 containerd[1492]: time="2026-03-14T00:13:47.944178127Z" level=info msg="StopPodSandbox for \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\"" Mar 14 00:13:47.946886 containerd[1492]: time="2026-03-14T00:13:47.946833039Z" level=info msg="StopPodSandbox for \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\"" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.030 [INFO][4326] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.030 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" iface="eth0" netns="/var/run/netns/cni-20b3dac9-33c0-6afe-812b-45ecddee06fc" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.030 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" iface="eth0" netns="/var/run/netns/cni-20b3dac9-33c0-6afe-812b-45ecddee06fc" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.030 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" iface="eth0" netns="/var/run/netns/cni-20b3dac9-33c0-6afe-812b-45ecddee06fc" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.030 [INFO][4326] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.030 [INFO][4326] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.061 [INFO][4346] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.062 [INFO][4346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.062 [INFO][4346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.076 [WARNING][4346] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.076 [INFO][4346] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.081 [INFO][4346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:48.084867 containerd[1492]: 2026-03-14 00:13:48.083 [INFO][4326] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:48.088580 containerd[1492]: time="2026-03-14T00:13:48.084975991Z" level=info msg="TearDown network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\" successfully" Mar 14 00:13:48.088580 containerd[1492]: time="2026-03-14T00:13:48.085002231Z" level=info msg="StopPodSandbox for \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\" returns successfully" Mar 14 00:13:48.089687 systemd[1]: run-netns-cni\x2d20b3dac9\x2d33c0\x2d6afe\x2d812b\x2d45ecddee06fc.mount: Deactivated successfully. Mar 14 00:13:48.093118 containerd[1492]: time="2026-03-14T00:13:48.092259839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k969,Uid:3c2769e1-ca6c-48f2-909e-e2592f4d7c1e,Namespace:calico-system,Attempt:1,}" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.038 [INFO][4324] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.039 [INFO][4324] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" iface="eth0" netns="/var/run/netns/cni-cb6916cd-2561-c469-1c24-dade86dc52a4" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.039 [INFO][4324] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" iface="eth0" netns="/var/run/netns/cni-cb6916cd-2561-c469-1c24-dade86dc52a4" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.039 [INFO][4324] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" iface="eth0" netns="/var/run/netns/cni-cb6916cd-2561-c469-1c24-dade86dc52a4" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.039 [INFO][4324] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.039 [INFO][4324] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.077 [INFO][4353] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.078 [INFO][4353] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.082 [INFO][4353] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.096 [WARNING][4353] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.096 [INFO][4353] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.098 [INFO][4353] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:48.104185 containerd[1492]: 2026-03-14 00:13:48.101 [INFO][4324] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:48.104962 containerd[1492]: time="2026-03-14T00:13:48.104851871Z" level=info msg="TearDown network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\" successfully" Mar 14 00:13:48.105145 containerd[1492]: time="2026-03-14T00:13:48.105128154Z" level=info msg="StopPodSandbox for \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\" returns successfully" Mar 14 00:13:48.108184 systemd[1]: run-netns-cni\x2dcb6916cd\x2d2561\x2dc469\x2d1c24\x2ddade86dc52a4.mount: Deactivated successfully. Mar 14 00:13:48.111332 containerd[1492]: time="2026-03-14T00:13:48.111012105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-wjkkn,Uid:54468044-a1de-4bd2-ad46-1b29248bc3b5,Namespace:calico-system,Attempt:1,}" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.036 [INFO][4325] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.036 [INFO][4325] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" iface="eth0" netns="/var/run/netns/cni-20316dce-b568-4281-dbdf-62128e4f637e" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.037 [INFO][4325] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" iface="eth0" netns="/var/run/netns/cni-20316dce-b568-4281-dbdf-62128e4f637e" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.038 [INFO][4325] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" iface="eth0" netns="/var/run/netns/cni-20316dce-b568-4281-dbdf-62128e4f637e" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.038 [INFO][4325] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.038 [INFO][4325] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.079 [INFO][4351] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.079 [INFO][4351] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.098 [INFO][4351] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.123 [WARNING][4351] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.123 [INFO][4351] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.127 [INFO][4351] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:48.154601 containerd[1492]: 2026-03-14 00:13:48.136 [INFO][4325] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:48.155231 containerd[1492]: time="2026-03-14T00:13:48.155106077Z" level=info msg="TearDown network for sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\" successfully" Mar 14 00:13:48.155231 containerd[1492]: time="2026-03-14T00:13:48.155136877Z" level=info msg="StopPodSandbox for \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\" returns successfully" Mar 14 00:13:48.159053 containerd[1492]: time="2026-03-14T00:13:48.158442717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w4qz7,Uid:176d1ac9-bc75-42c6-9936-a88fc33155e1,Namespace:kube-system,Attempt:1,}" Mar 14 00:13:48.337867 systemd-networkd[1384]: cali09dfdca86cd: Link UP Mar 14 00:13:48.340393 systemd-networkd[1384]: cali09dfdca86cd: Gained carrier Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.187 [INFO][4366] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0 csi-node-driver- calico-system 3c2769e1-ca6c-48f2-909e-e2592f4d7c1e 973 0 2026-03-14 00:13:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e csi-node-driver-4k969 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali09dfdca86cd [] [] }} ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.187 [INFO][4366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.261 [INFO][4399] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" HandleID="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 
00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.285 [INFO][4399] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" HandleID="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ea600), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"csi-node-driver-4k969", "timestamp":"2026-03-14 00:13:48.261212796 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001ee420)} Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.285 [INFO][4399] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.285 [INFO][4399] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.285 [INFO][4399] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.288 [INFO][4399] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.298 [INFO][4399] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.312 [INFO][4399] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.314 [INFO][4399] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.316 [INFO][4399] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.316 [INFO][4399] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.318 [INFO][4399] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.323 [INFO][4399] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.331 [INFO][4399] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.130/26] block=192.168.104.128/26 handle="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.331 [INFO][4399] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.130/26] 
handle="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.331 [INFO][4399] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:48.361677 containerd[1492]: 2026-03-14 00:13:48.331 [INFO][4399] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.130/26] IPv6=[] ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" HandleID="k8s-pod-network.3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.362245 containerd[1492]: 2026-03-14 00:13:48.335 [INFO][4366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"csi-node-driver-4k969", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09dfdca86cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:48.362245 containerd[1492]: 2026-03-14 00:13:48.335 [INFO][4366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.130/32] ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.362245 containerd[1492]: 2026-03-14 00:13:48.335 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09dfdca86cd ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.362245 containerd[1492]: 2026-03-14 00:13:48.340 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" 
WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.362245 containerd[1492]: 2026-03-14 00:13:48.341 [INFO][4366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f", Pod:"csi-node-driver-4k969", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09dfdca86cd", MAC:"c2:cf:78:9a:d3:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:48.362245 containerd[1492]: 2026-03-14 00:13:48.357 [INFO][4366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f" Namespace="calico-system" Pod="csi-node-driver-4k969" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:48.386352 containerd[1492]: time="2026-03-14T00:13:48.384879928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:48.386352 containerd[1492]: time="2026-03-14T00:13:48.384980449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:48.386352 containerd[1492]: time="2026-03-14T00:13:48.385008049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:48.386352 containerd[1492]: time="2026-03-14T00:13:48.385116010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:48.412827 systemd[1]: Started cri-containerd-3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f.scope - libcontainer container 3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f. 
Mar 14 00:13:48.452346 systemd-networkd[1384]: caliaf471c7e5be: Link UP Mar 14 00:13:48.453462 systemd-networkd[1384]: caliaf471c7e5be: Gained carrier Mar 14 00:13:48.480442 containerd[1492]: time="2026-03-14T00:13:48.480405599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k969,Uid:3c2769e1-ca6c-48f2-909e-e2592f4d7c1e,Namespace:calico-system,Attempt:1,} returns sandbox id \"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f\"" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.195 [INFO][4376] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0 calico-apiserver-7458dd48bf- calico-system 54468044-a1de-4bd2-ad46-1b29248bc3b5 975 0 2026-03-14 00:13:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7458dd48bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e calico-apiserver-7458dd48bf-wjkkn eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliaf471c7e5be [] [] }} ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.196 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.277 [INFO][4404] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" HandleID="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.299 [INFO][4404] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" HandleID="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"calico-apiserver-7458dd48bf-wjkkn", "timestamp":"2026-03-14 00:13:48.277216629 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001e8dc0)} Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.299 [INFO][4404] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.331 [INFO][4404] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.331 [INFO][4404] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.390 [INFO][4404] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.400 [INFO][4404] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.412 [INFO][4404] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.418 [INFO][4404] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.423 [INFO][4404] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.423 [INFO][4404] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.426 [INFO][4404] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741 Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.433 [INFO][4404] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.442 [INFO][4404] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.131/26] block=192.168.104.128/26 handle="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.442 [INFO][4404] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.131/26] handle="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.443 [INFO][4404] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:48.485005 containerd[1492]: 2026-03-14 00:13:48.443 [INFO][4404] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.131/26] IPv6=[] ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" HandleID="k8s-pod-network.95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.485639 containerd[1492]: 2026-03-14 00:13:48.446 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"54468044-a1de-4bd2-ad46-1b29248bc3b5", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"calico-apiserver-7458dd48bf-wjkkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaf471c7e5be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:48.485639 containerd[1492]: 2026-03-14 00:13:48.446 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.131/32] ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.485639 containerd[1492]: 2026-03-14 00:13:48.446 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf471c7e5be ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.485639 containerd[1492]: 2026-03-14 00:13:48.454 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.485639 containerd[1492]: 2026-03-14 00:13:48.454 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"54468044-a1de-4bd2-ad46-1b29248bc3b5", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741", Pod:"calico-apiserver-7458dd48bf-wjkkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaf471c7e5be", MAC:"fa:d2:03:da:f7:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:48.485639 containerd[1492]: 2026-03-14 00:13:48.477 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-wjkkn" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:48.496474 containerd[1492]: time="2026-03-14T00:13:48.493743840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:13:48.544099 containerd[1492]: time="2026-03-14T00:13:48.543795324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:48.544099 containerd[1492]: time="2026-03-14T00:13:48.543858725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:48.544099 containerd[1492]: time="2026-03-14T00:13:48.543869725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:48.544099 containerd[1492]: time="2026-03-14T00:13:48.543940246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:48.561387 systemd-networkd[1384]: cali5f24e6d2c99: Link UP Mar 14 00:13:48.562489 systemd-networkd[1384]: cali5f24e6d2c99: Gained carrier Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.252 [INFO][4389] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0 coredns-66bc5c9577- kube-system 176d1ac9-bc75-42c6-9936-a88fc33155e1 974 0 2026-03-14 00:13:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e coredns-66bc5c9577-w4qz7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f24e6d2c99 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.252 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.302 [INFO][4421] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" HandleID="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.313 [INFO][4421] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" HandleID="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb3e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"coredns-66bc5c9577-w4qz7", "timestamp":"2026-03-14 00:13:48.302337492 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000264dc0)} Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.313 [INFO][4421] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.443 [INFO][4421] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.443 [INFO][4421] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.491 [INFO][4421] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.504 [INFO][4421] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.515 [INFO][4421] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.518 [INFO][4421] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.523 [INFO][4421] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.523 [INFO][4421] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.528 [INFO][4421] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.535 [INFO][4421] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.550 [INFO][4421] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.132/26] block=192.168.104.128/26 handle="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.550 [INFO][4421] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.132/26] handle="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.550 [INFO][4421] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:48.587630 containerd[1492]: 2026-03-14 00:13:48.550 [INFO][4421] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.132/26] IPv6=[] ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" HandleID="k8s-pod-network.c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.589070 containerd[1492]: 2026-03-14 00:13:48.557 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"176d1ac9-bc75-42c6-9936-a88fc33155e1", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"coredns-66bc5c9577-w4qz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f24e6d2c99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:48.589070 containerd[1492]: 2026-03-14 00:13:48.557 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.132/32] ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.589070 containerd[1492]: 2026-03-14 00:13:48.557 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f24e6d2c99 ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.589070 containerd[1492]: 2026-03-14 00:13:48.559 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.589070 containerd[1492]: 2026-03-14 00:13:48.562 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"176d1ac9-bc75-42c6-9936-a88fc33155e1", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a", Pod:"coredns-66bc5c9577-w4qz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f24e6d2c99", MAC:"d2:04:58:21:4a:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:48.589597 containerd[1492]: 2026-03-14 00:13:48.584 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a" Namespace="kube-system" Pod="coredns-66bc5c9577-w4qz7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:48.602148 systemd[1]: Started cri-containerd-95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741.scope - libcontainer container 95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741. Mar 14 00:13:48.618036 containerd[1492]: time="2026-03-14T00:13:48.617817896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:48.618036 containerd[1492]: time="2026-03-14T00:13:48.617961178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:48.618036 containerd[1492]: time="2026-03-14T00:13:48.617996379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:48.619674 containerd[1492]: time="2026-03-14T00:13:48.619381155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:48.653491 systemd[1]: Started cri-containerd-c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a.scope - libcontainer container c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a. Mar 14 00:13:48.671969 containerd[1492]: time="2026-03-14T00:13:48.671841068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-wjkkn,Uid:54468044-a1de-4bd2-ad46-1b29248bc3b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741\"" Mar 14 00:13:48.708825 containerd[1492]: time="2026-03-14T00:13:48.708784033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-w4qz7,Uid:176d1ac9-bc75-42c6-9936-a88fc33155e1,Namespace:kube-system,Attempt:1,} returns sandbox id \"c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a\"" Mar 14 00:13:48.718311 containerd[1492]: time="2026-03-14T00:13:48.718133066Z" level=info msg="CreateContainer within sandbox \"c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:13:48.733710 containerd[1492]: time="2026-03-14T00:13:48.733572092Z" level=info msg="CreateContainer within sandbox \"c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aab7bc800b0d2a5300e5c53de6345f2f65cf55cb47ef6dd0be8910edf2839eda\"" Mar 14 00:13:48.735369 containerd[1492]: time="2026-03-14T00:13:48.735325753Z" level=info msg="StartContainer for \"aab7bc800b0d2a5300e5c53de6345f2f65cf55cb47ef6dd0be8910edf2839eda\"" Mar 14 00:13:48.763492 systemd[1]: Started cri-containerd-aab7bc800b0d2a5300e5c53de6345f2f65cf55cb47ef6dd0be8910edf2839eda.scope - libcontainer container aab7bc800b0d2a5300e5c53de6345f2f65cf55cb47ef6dd0be8910edf2839eda.
Mar 14 00:13:48.795383 containerd[1492]: time="2026-03-14T00:13:48.794259904Z" level=info msg="StartContainer for \"aab7bc800b0d2a5300e5c53de6345f2f65cf55cb47ef6dd0be8910edf2839eda\" returns successfully" Mar 14 00:13:48.941518 containerd[1492]: time="2026-03-14T00:13:48.941344118Z" level=info msg="StopPodSandbox for \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\"" Mar 14 00:13:48.942079 containerd[1492]: time="2026-03-14T00:13:48.941810043Z" level=info msg="StopPodSandbox for \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\"" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.022 [INFO][4667] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.023 [INFO][4667] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" iface="eth0" netns="/var/run/netns/cni-da37144d-53fc-1b22-5380-16aabc0eec7e" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.024 [INFO][4667] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" iface="eth0" netns="/var/run/netns/cni-da37144d-53fc-1b22-5380-16aabc0eec7e" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.024 [INFO][4667] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" iface="eth0" netns="/var/run/netns/cni-da37144d-53fc-1b22-5380-16aabc0eec7e" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.024 [INFO][4667] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.024 [INFO][4667] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.063 [INFO][4679] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.063 [INFO][4679] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.063 [INFO][4679] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.078 [WARNING][4679] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.078 [INFO][4679] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.080 [INFO][4679] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:49.088789 containerd[1492]: 2026-03-14 00:13:49.084 [INFO][4667] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:49.092383 containerd[1492]: time="2026-03-14T00:13:49.091410358Z" level=info msg="TearDown network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\" successfully" Mar 14 00:13:49.092383 containerd[1492]: time="2026-03-14T00:13:49.091457199Z" level=info msg="StopPodSandbox for \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\" returns successfully" Mar 14 00:13:49.096852 containerd[1492]: time="2026-03-14T00:13:49.096813823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-crltd,Uid:d6ead6b1-357d-411f-8456-c605fe68bb57,Namespace:calico-system,Attempt:1,}" Mar 14 00:13:49.098172 systemd[1]: run-netns-cni\x2d20316dce\x2db568\x2d4281\x2ddbdf\x2d62128e4f637e.mount: Deactivated successfully. Mar 14 00:13:49.103029 systemd[1]: run-netns-cni\x2dda37144d\x2d53fc\x2d1b22\x2d5380\x2d16aabc0eec7e.mount: Deactivated successfully. Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.041 [INFO][4666] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.041 [INFO][4666] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" iface="eth0" netns="/var/run/netns/cni-4d34ccf6-3e1f-785e-df91-4f2d81932642" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.042 [INFO][4666] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" iface="eth0" netns="/var/run/netns/cni-4d34ccf6-3e1f-785e-df91-4f2d81932642" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.042 [INFO][4666] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" iface="eth0" netns="/var/run/netns/cni-4d34ccf6-3e1f-785e-df91-4f2d81932642" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.042 [INFO][4666] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.042 [INFO][4666] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.079 [INFO][4684] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.079 [INFO][4684] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.081 [INFO][4684] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.103 [WARNING][4684] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.103 [INFO][4684] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.107 [INFO][4684] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:49.112249 containerd[1492]: 2026-03-14 00:13:49.110 [INFO][4666] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:49.114049 containerd[1492]: time="2026-03-14T00:13:49.113996028Z" level=info msg="TearDown network for sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\" successfully" Mar 14 00:13:49.114049 containerd[1492]: time="2026-03-14T00:13:49.114033189Z" level=info msg="StopPodSandbox for \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\" returns successfully" Mar 14 00:13:49.116224 containerd[1492]: time="2026-03-14T00:13:49.115913371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pr7sg,Uid:cf9c5ce0-11b8-40fd-9752-8b6c4229fbea,Namespace:calico-system,Attempt:1,}" Mar 14 00:13:49.116465 systemd[1]: run-netns-cni\x2d4d34ccf6\x2d3e1f\x2d785e\x2ddf91\x2d4f2d81932642.mount: Deactivated successfully.
Mar 14 00:13:49.246124 kubelet[2624]: I0314 00:13:49.245974 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w4qz7" podStartSLOduration=47.245956206 podStartE2EDuration="47.245956206s" podCreationTimestamp="2026-03-14 00:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:49.24377382 +0000 UTC m=+53.448868329" watchObservedRunningTime="2026-03-14 00:13:49.245956206 +0000 UTC m=+53.451050715" Mar 14 00:13:49.339376 systemd-networkd[1384]: cali758b4d3cc4b: Link UP Mar 14 00:13:49.339630 systemd-networkd[1384]: cali758b4d3cc4b: Gained carrier Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.177 [INFO][4692] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0 calico-apiserver-7458dd48bf- calico-system d6ead6b1-357d-411f-8456-c605fe68bb57 994 0 2026-03-14 00:13:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7458dd48bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e calico-apiserver-7458dd48bf-crltd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali758b4d3cc4b [] [] }} ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.178 [INFO][4692] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.229 [INFO][4713] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" HandleID="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.248 [INFO][4713] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" HandleID="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ea140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"calico-apiserver-7458dd48bf-crltd", "timestamp":"2026-03-14 00:13:49.229106845 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010c2c0)} Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.249 [INFO][4713] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.249 [INFO][4713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.249 [INFO][4713] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.258 [INFO][4713] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.281 [INFO][4713] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.309 [INFO][4713] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.313 [INFO][4713] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.316 [INFO][4713] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.316 [INFO][4713] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.318 [INFO][4713] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.325 [INFO][4713] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.332 [INFO][4713] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.133/26] block=192.168.104.128/26 handle="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.332 [INFO][4713] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.133/26] handle="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.332 [INFO][4713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:49.367583 containerd[1492]: 2026-03-14 00:13:49.333 [INFO][4713] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.133/26] IPv6=[] ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" HandleID="k8s-pod-network.d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.368185 containerd[1492]: 2026-03-14 00:13:49.336 [INFO][4692] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"d6ead6b1-357d-411f-8456-c605fe68bb57", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"calico-apiserver-7458dd48bf-crltd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali758b4d3cc4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:49.368185 containerd[1492]: 2026-03-14 00:13:49.336 [INFO][4692] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.133/32] ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.368185 containerd[1492]: 2026-03-14 00:13:49.336 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali758b4d3cc4b ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.368185 containerd[1492]: 2026-03-14 00:13:49.339 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.368185 containerd[1492]: 2026-03-14 00:13:49.348 [INFO][4692] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"d6ead6b1-357d-411f-8456-c605fe68bb57", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a", Pod:"calico-apiserver-7458dd48bf-crltd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali758b4d3cc4b", MAC:"aa:39:15:33:5a:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:49.368185 containerd[1492]: 2026-03-14 00:13:49.362 [INFO][4692] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a" Namespace="calico-system" Pod="calico-apiserver-7458dd48bf-crltd" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:49.396046 containerd[1492]: time="2026-03-14T00:13:49.395064309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:49.396046 containerd[1492]: time="2026-03-14T00:13:49.395130030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:49.396046 containerd[1492]: time="2026-03-14T00:13:49.395145950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:49.396046 containerd[1492]: time="2026-03-14T00:13:49.395235551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:49.418497 systemd[1]: Started cri-containerd-d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a.scope - libcontainer container d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a. 
Mar 14 00:13:49.450534 systemd-networkd[1384]: calif69d9053f48: Link UP Mar 14 00:13:49.453586 systemd-networkd[1384]: calif69d9053f48: Gained carrier Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.207 [INFO][4700] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0 goldmane-cccfbd5cf- calico-system cf9c5ce0-11b8-40fd-9752-8b6c4229fbea 995 0 2026-03-14 00:13:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e goldmane-cccfbd5cf-pr7sg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif69d9053f48 [] [] }} ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.207 [INFO][4700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.289 [INFO][4723] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" HandleID="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.312 [INFO][4723] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" HandleID="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000272170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"goldmane-cccfbd5cf-pr7sg", "timestamp":"2026-03-14 00:13:49.289690209 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000252000)} Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.312 [INFO][4723] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.332 [INFO][4723] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.332 [INFO][4723] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.355 [INFO][4723] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.378 [INFO][4723] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.403 [INFO][4723] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.407 [INFO][4723] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.416 [INFO][4723] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.417 [INFO][4723] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.422 [INFO][4723] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.427 [INFO][4723] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.435 [INFO][4723] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.134/26] block=192.168.104.128/26 handle="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.435 [INFO][4723] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.134/26] handle="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.435 [INFO][4723] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:49.476422 containerd[1492]: 2026-03-14 00:13:49.435 [INFO][4723] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.134/26] IPv6=[] ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" HandleID="k8s-pod-network.5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.477810 containerd[1492]: 2026-03-14 00:13:49.440 [INFO][4700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"goldmane-cccfbd5cf-pr7sg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif69d9053f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:49.477810 containerd[1492]: 2026-03-14 00:13:49.441 [INFO][4700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.134/32] ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.477810 containerd[1492]: 2026-03-14 00:13:49.441 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif69d9053f48 ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.477810 containerd[1492]: 2026-03-14 00:13:49.453 [INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.477810 containerd[1492]: 2026-03-14 00:13:49.455 [INFO][4700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d", Pod:"goldmane-cccfbd5cf-pr7sg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif69d9053f48", MAC:"e6:65:fe:2b:fe:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:49.477810 containerd[1492]: 2026-03-14 00:13:49.473 [INFO][4700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d" Namespace="calico-system" Pod="goldmane-cccfbd5cf-pr7sg" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:49.506051 containerd[1492]: time="2026-03-14T00:13:49.505939115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7458dd48bf-crltd,Uid:d6ead6b1-357d-411f-8456-c605fe68bb57,Namespace:calico-system,Attempt:1,} returns sandbox id \"d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a\"" Mar 14 00:13:49.512186 containerd[1492]: time="2026-03-14T00:13:49.511377060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:49.512186 containerd[1492]: time="2026-03-14T00:13:49.512114228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:49.512186 containerd[1492]: time="2026-03-14T00:13:49.512156269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:49.512463 containerd[1492]: time="2026-03-14T00:13:49.512405552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:49.534533 systemd[1]: Started cri-containerd-5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d.scope - libcontainer container 5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d. 
Mar 14 00:13:49.596661 containerd[1492]: time="2026-03-14T00:13:49.596607959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-pr7sg,Uid:cf9c5ce0-11b8-40fd-9752-8b6c4229fbea,Namespace:calico-system,Attempt:1,} returns sandbox id \"5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d\"" Mar 14 00:13:49.644412 systemd-networkd[1384]: cali09dfdca86cd: Gained IPv6LL Mar 14 00:13:49.835423 systemd-networkd[1384]: caliaf471c7e5be: Gained IPv6LL Mar 14 00:13:49.922961 containerd[1492]: time="2026-03-14T00:13:49.922847740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:49.924290 containerd[1492]: time="2026-03-14T00:13:49.924097675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 14 00:13:49.925668 containerd[1492]: time="2026-03-14T00:13:49.925454931Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:49.929060 containerd[1492]: time="2026-03-14T00:13:49.928656569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:49.929682 containerd[1492]: time="2026-03-14T00:13:49.929647781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.43586026s" Mar 14 00:13:49.929828 containerd[1492]: time="2026-03-14T00:13:49.929809263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 14 00:13:49.932203 containerd[1492]: time="2026-03-14T00:13:49.932177691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:13:49.935593 containerd[1492]: time="2026-03-14T00:13:49.935508571Z" level=info msg="CreateContainer within sandbox \"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:13:49.955385 containerd[1492]: time="2026-03-14T00:13:49.954871603Z" level=info msg="StopPodSandbox for \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\"" Mar 14 00:13:49.963027 containerd[1492]: time="2026-03-14T00:13:49.962985860Z" level=info msg="CreateContainer within sandbox \"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c7ccb2d0a302a704f5b4738f4b2154c722ef6dd83057b391a0a4f74b48b027de\"" Mar 14 00:13:49.964421 containerd[1492]: time="2026-03-14T00:13:49.964381156Z" level=info msg="StartContainer for \"c7ccb2d0a302a704f5b4738f4b2154c722ef6dd83057b391a0a4f74b48b027de\"" Mar 14 00:13:49.997671 systemd[1]: Started cri-containerd-c7ccb2d0a302a704f5b4738f4b2154c722ef6dd83057b391a0a4f74b48b027de.scope - libcontainer container c7ccb2d0a302a704f5b4738f4b2154c722ef6dd83057b391a0a4f74b48b027de. 
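[Editor's note: containerd's own entries in this log are structured records (time="…" level=… msg=…), so facts like the pull duration above — "Pulled image … in 1.43586026s" — can be extracted mechanically. A small sketch that pulls the image name and duration out of a line in this format; the regex is mine and assumes the escaped-quote style shown above:

    import re

    line = ('time="2026-03-14T00:13:49.929647781Z" level=info '
            'msg="Pulled image \\"ghcr.io/flatcar/calico/csi:v3.31.4\\" '
            'with image id ... in 1.43586026s"')   # abridged copy of the entry above

    # The image name sits in escaped quotes after "Pulled image"; the Go-style
    # duration is the trailing "in <dur>" just before the closing quote.
    pat = re.compile(r'msg="Pulled image \\"(?P<image>[^\\]+)\\".* in (?P<dur>[0-9.]+[a-z]+)"')
    m = pat.search(line)
    if m:
        print(m.group("image"), m.group("dur"))
        # -> ghcr.io/flatcar/calico/csi:v3.31.4 1.43586026s

End note.]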
Mar 14 00:13:50.025911 systemd-networkd[1384]: cali5f24e6d2c99: Gained IPv6LL Mar 14 00:13:50.048779 containerd[1492]: time="2026-03-14T00:13:50.048432677Z" level=info msg="StartContainer for \"c7ccb2d0a302a704f5b4738f4b2154c722ef6dd83057b391a0a4f74b48b027de\" returns successfully" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.048 [INFO][4868] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.049 [INFO][4868] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" iface="eth0" netns="/var/run/netns/cni-a53ba1f7-39fd-2e20-6592-e97f6973b81e" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.049 [INFO][4868] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" iface="eth0" netns="/var/run/netns/cni-a53ba1f7-39fd-2e20-6592-e97f6973b81e" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.050 [INFO][4868] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" iface="eth0" netns="/var/run/netns/cni-a53ba1f7-39fd-2e20-6592-e97f6973b81e" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.050 [INFO][4868] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.050 [INFO][4868] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.078 [INFO][4909] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.079 [INFO][4909] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.079 [INFO][4909] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.093 [WARNING][4909] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.093 [INFO][4909] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.096 [INFO][4909] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:50.101156 containerd[1492]: 2026-03-14 00:13:50.099 [INFO][4868] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:50.103345 containerd[1492]: time="2026-03-14T00:13:50.102086633Z" level=info msg="TearDown network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\" successfully" Mar 14 00:13:50.103345 containerd[1492]: time="2026-03-14T00:13:50.102128234Z" level=info msg="StopPodSandbox for \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\" returns successfully" Mar 14 00:13:50.105918 systemd[1]: run-netns-cni\x2da53ba1f7\x2d39fd\x2d2e20\x2d6592\x2de97f6973b81e.mount: Deactivated successfully. Mar 14 00:13:50.108814 containerd[1492]: time="2026-03-14T00:13:50.108738272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d24ss,Uid:5ccf3f92-1893-45dd-8984-7c1c3523f0d0,Namespace:kube-system,Attempt:1,}" Mar 14 00:13:50.265863 systemd-networkd[1384]: cali7172ef362bf: Link UP Mar 14 00:13:50.267447 systemd-networkd[1384]: cali7172ef362bf: Gained carrier Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.167 [INFO][4920] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0 coredns-66bc5c9577- kube-system 5ccf3f92-1893-45dd-8984-7c1c3523f0d0 1016 0 2026-03-14 00:13:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e coredns-66bc5c9577-d24ss eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7172ef362bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.167 [INFO][4920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.198 [INFO][4933] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" HandleID="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.210 [INFO][4933] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" HandleID="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"coredns-66bc5c9577-d24ss", "timestamp":"2026-03-14 00:13:50.198948502 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003aa000)} Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.210 [INFO][4933] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.210 [INFO][4933] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.210 [INFO][4933] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.213 [INFO][4933] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.219 [INFO][4933] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.225 [INFO][4933] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.228 [INFO][4933] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.231 [INFO][4933] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.231 [INFO][4933] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.234 [INFO][4933] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498 Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.242 [INFO][4933] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.250 [INFO][4933] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.135/26] block=192.168.104.128/26 handle="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.250 [INFO][4933] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.135/26] handle="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.250 [INFO][4933] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:50.289475 containerd[1492]: 2026-03-14 00:13:50.251 [INFO][4933] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.135/26] IPv6=[] ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" HandleID="k8s-pod-network.e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.290919 containerd[1492]: 2026-03-14 00:13:50.253 [INFO][4920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5ccf3f92-1893-45dd-8984-7c1c3523f0d0", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"coredns-66bc5c9577-d24ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7172ef362bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:50.290919 containerd[1492]: 2026-03-14 00:13:50.254 [INFO][4920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.135/32] ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.290919 containerd[1492]: 2026-03-14 00:13:50.254 [INFO][4920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7172ef362bf ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" 
WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.290919 containerd[1492]: 2026-03-14 00:13:50.267 [INFO][4920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.290919 containerd[1492]: 2026-03-14 00:13:50.268 [INFO][4920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5ccf3f92-1893-45dd-8984-7c1c3523f0d0", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498", Pod:"coredns-66bc5c9577-d24ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7172ef362bf", MAC:"fa:98:48:05:7b:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:50.291153 containerd[1492]: 2026-03-14 00:13:50.286 [INFO][4920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498" Namespace="kube-system" Pod="coredns-66bc5c9577-d24ss" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:50.323695 containerd[1492]: time="2026-03-14T00:13:50.322859882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:50.323695 containerd[1492]: time="2026-03-14T00:13:50.322927963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:50.323695 containerd[1492]: time="2026-03-14T00:13:50.322958564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.323695 containerd[1492]: time="2026-03-14T00:13:50.323053885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.363803 systemd[1]: Started cri-containerd-e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498.scope - libcontainer container e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498. Mar 14 00:13:50.412054 containerd[1492]: time="2026-03-14T00:13:50.411926587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-d24ss,Uid:5ccf3f92-1893-45dd-8984-7c1c3523f0d0,Namespace:kube-system,Attempt:1,} returns sandbox id \"e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498\"" Mar 14 00:13:50.421993 containerd[1492]: time="2026-03-14T00:13:50.421932191Z" level=info msg="CreateContainer within sandbox \"e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:13:50.444779 containerd[1492]: time="2026-03-14T00:13:50.444142867Z" level=info msg="CreateContainer within sandbox \"e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77db039087bd1517b81570a56bdf218214562d7e80bf3db6b267a3a2e9740c9d\"" Mar 14 00:13:50.446035 containerd[1492]: time="2026-03-14T00:13:50.445978289Z" level=info msg="StartContainer for \"77db039087bd1517b81570a56bdf218214562d7e80bf3db6b267a3a2e9740c9d\"" Mar 14 00:13:50.474933 systemd[1]: Started cri-containerd-77db039087bd1517b81570a56bdf218214562d7e80bf3db6b267a3a2e9740c9d.scope - libcontainer container 77db039087bd1517b81570a56bdf218214562d7e80bf3db6b267a3a2e9740c9d. Mar 14 00:13:50.506270 containerd[1492]: time="2026-03-14T00:13:50.506058515Z" level=info msg="StartContainer for \"77db039087bd1517b81570a56bdf218214562d7e80bf3db6b267a3a2e9740c9d\" returns successfully" Mar 14 00:13:50.601601 systemd-networkd[1384]: calif69d9053f48: Gained IPv6LL Mar 14 00:13:50.666252 systemd-networkd[1384]: cali758b4d3cc4b: Gained IPv6LL Mar 14 00:13:50.943337 containerd[1492]: time="2026-03-14T00:13:50.942076602Z" level=info msg="StopPodSandbox for \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\"" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.010 [INFO][5052] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.010 [INFO][5052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" iface="eth0" netns="/var/run/netns/cni-09ffb0fb-1684-2fef-47ae-047f7daaad19" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.011 [INFO][5052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" iface="eth0" netns="/var/run/netns/cni-09ffb0fb-1684-2fef-47ae-047f7daaad19" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.012 [INFO][5052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" iface="eth0" netns="/var/run/netns/cni-09ffb0fb-1684-2fef-47ae-047f7daaad19" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.012 [INFO][5052] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.012 [INFO][5052] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.037 [INFO][5059] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.038 [INFO][5059] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.038 [INFO][5059] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.050 [WARNING][5059] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.051 [INFO][5059] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.053 [INFO][5059] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:51.057927 containerd[1492]: 2026-03-14 00:13:51.055 [INFO][5052] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:51.057927 containerd[1492]: time="2026-03-14T00:13:51.057782321Z" level=info msg="TearDown network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\" successfully" Mar 14 00:13:51.057927 containerd[1492]: time="2026-03-14T00:13:51.057808922Z" level=info msg="StopPodSandbox for \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\" returns successfully" Mar 14 00:13:51.061457 containerd[1492]: time="2026-03-14T00:13:51.061416017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bdccb5d5-c59xx,Uid:ad64364e-d94a-400e-a2c3-7d753a27a0d8,Namespace:calico-system,Attempt:1,}" Mar 14 00:13:51.096414 systemd[1]: run-netns-cni\x2d09ffb0fb\x2d1684\x2d2fef\x2d47ae\x2d047f7daaad19.mount: Deactivated successfully. 
Mar 14 00:13:51.207069 systemd-networkd[1384]: cali7eda0dd7fa0: Link UP Mar 14 00:13:51.209807 systemd-networkd[1384]: cali7eda0dd7fa0: Gained carrier Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.111 [INFO][5065] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0 calico-kube-controllers-77bdccb5d5- calico-system ad64364e-d94a-400e-a2c3-7d753a27a0d8 1029 0 2026-03-14 00:13:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77bdccb5d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-8cab04691e calico-kube-controllers-77bdccb5d5-c59xx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7eda0dd7fa0 [] [] }} ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.111 [INFO][5065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.138 [INFO][5078] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" HandleID="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.151 [INFO][5078] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" HandleID="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed850), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-8cab04691e", "pod":"calico-kube-controllers-77bdccb5d5-c59xx", "timestamp":"2026-03-14 00:13:51.138151591 +0000 UTC"}, Hostname:"ci-4081-3-6-n-8cab04691e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b3080)} Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.152 [INFO][5078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.152 [INFO][5078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.152 [INFO][5078] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-8cab04691e' Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.157 [INFO][5078] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.163 [INFO][5078] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.170 [INFO][5078] ipam/ipam.go 526: Trying affinity for 192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.172 [INFO][5078] ipam/ipam.go 160: Attempting to load block cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.176 [INFO][5078] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.104.128/26 host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.176 [INFO][5078] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.104.128/26 handle="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.178 [INFO][5078] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.186 [INFO][5078] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.104.128/26 handle="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.196 [INFO][5078] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.104.136/26] block=192.168.104.128/26 handle="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.196 [INFO][5078] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.104.136/26] handle="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" host="ci-4081-3-6-n-8cab04691e" Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.196 [INFO][5078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:13:51.238224 containerd[1492]: 2026-03-14 00:13:51.196 [INFO][5078] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.104.136/26] IPv6=[] ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" HandleID="k8s-pod-network.381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.240004 containerd[1492]: 2026-03-14 00:13:51.202 [INFO][5065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0", GenerateName:"calico-kube-controllers-77bdccb5d5-", Namespace:"calico-system", SelfLink:"", UID:"ad64364e-d94a-400e-a2c3-7d753a27a0d8", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bdccb5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"", Pod:"calico-kube-controllers-77bdccb5d5-c59xx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7eda0dd7fa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:51.240004 containerd[1492]: 2026-03-14 00:13:51.203 [INFO][5065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.136/32] ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.240004 containerd[1492]: 2026-03-14 00:13:51.203 [INFO][5065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7eda0dd7fa0 ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.240004 containerd[1492]: 2026-03-14 00:13:51.210 [INFO][5065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" 
WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.240004 containerd[1492]: 2026-03-14 00:13:51.211 [INFO][5065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0", GenerateName:"calico-kube-controllers-77bdccb5d5-", Namespace:"calico-system", SelfLink:"", UID:"ad64364e-d94a-400e-a2c3-7d753a27a0d8", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bdccb5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e", Pod:"calico-kube-controllers-77bdccb5d5-c59xx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7eda0dd7fa0", MAC:"2a:5d:65:c5:b6:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:51.240004 containerd[1492]: 2026-03-14 00:13:51.232 [INFO][5065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e" Namespace="calico-system" Pod="calico-kube-controllers-77bdccb5d5-c59xx" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:51.286075 containerd[1492]: time="2026-03-14T00:13:51.284458869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:51.286075 containerd[1492]: time="2026-03-14T00:13:51.284584791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:51.286075 containerd[1492]: time="2026-03-14T00:13:51.284619192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:51.286075 containerd[1492]: time="2026-03-14T00:13:51.284715473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:51.334444 kubelet[2624]: I0314 00:13:51.334353 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d24ss" podStartSLOduration=49.334332153 podStartE2EDuration="49.334332153s" podCreationTimestamp="2026-03-14 00:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:51.298822609 +0000 UTC m=+55.503917158" watchObservedRunningTime="2026-03-14 00:13:51.334332153 +0000 UTC m=+55.539426702" Mar 14 00:13:51.344593 systemd[1]: Started cri-containerd-381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e.scope - libcontainer container 381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e. Mar 14 00:13:51.425126 containerd[1492]: time="2026-03-14T00:13:51.425087861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77bdccb5d5-c59xx,Uid:ad64364e-d94a-400e-a2c3-7d753a27a0d8,Namespace:calico-system,Attempt:1,} returns sandbox id \"381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e\"" Mar 14 00:13:52.102911 containerd[1492]: time="2026-03-14T00:13:52.102840897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:52.104688 containerd[1492]: time="2026-03-14T00:13:52.104626284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 14 00:13:52.105795 containerd[1492]: time="2026-03-14T00:13:52.105206813Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:52.110326 containerd[1492]: time="2026-03-14T00:13:52.110194449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:52.112284 containerd[1492]: time="2026-03-14T00:13:52.111848354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.1794859s" Mar 14 00:13:52.112284 containerd[1492]: time="2026-03-14T00:13:52.111888955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:13:52.114838 containerd[1492]: time="2026-03-14T00:13:52.114798039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:13:52.119184 containerd[1492]: time="2026-03-14T00:13:52.118993382Z" level=info msg="CreateContainer within sandbox \"95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:13:52.132930 containerd[1492]: time="2026-03-14T00:13:52.132861033Z" level=info msg="CreateContainer within sandbox \"95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"9e482b386588bf38e32a2c15c9067337c8171b5fbff387f9e24afd34112f06b1\"" Mar 14 00:13:52.134324 containerd[1492]: time="2026-03-14T00:13:52.133767166Z" level=info msg="StartContainer for \"9e482b386588bf38e32a2c15c9067337c8171b5fbff387f9e24afd34112f06b1\"" Mar 14 00:13:52.175447 systemd[1]: Started cri-containerd-9e482b386588bf38e32a2c15c9067337c8171b5fbff387f9e24afd34112f06b1.scope - libcontainer container 9e482b386588bf38e32a2c15c9067337c8171b5fbff387f9e24afd34112f06b1. Mar 14 00:13:52.212757 containerd[1492]: time="2026-03-14T00:13:52.212710764Z" level=info msg="StartContainer for \"9e482b386588bf38e32a2c15c9067337c8171b5fbff387f9e24afd34112f06b1\" returns successfully" Mar 14 00:13:52.267420 systemd-networkd[1384]: cali7172ef362bf: Gained IPv6LL Mar 14 00:13:52.296739 kubelet[2624]: I0314 00:13:52.296600 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7458dd48bf-wjkkn" podStartSLOduration=29.856976387 podStartE2EDuration="33.296585835s" podCreationTimestamp="2026-03-14 00:13:19 +0000 UTC" firstStartedPulling="2026-03-14 00:13:48.674202016 +0000 UTC m=+52.879296525" lastFinishedPulling="2026-03-14 00:13:52.113811464 +0000 UTC m=+56.318905973" observedRunningTime="2026-03-14 00:13:52.29624747 +0000 UTC m=+56.501342019" watchObservedRunningTime="2026-03-14 00:13:52.296585835 +0000 UTC m=+56.501680344" Mar 14 00:13:52.513327 containerd[1492]: time="2026-03-14T00:13:52.512550431Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:52.513327 containerd[1492]: time="2026-03-14T00:13:52.513188360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:13:52.516619 containerd[1492]: time="2026-03-14T00:13:52.516402929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 401.532889ms" Mar 14 00:13:52.516619 containerd[1492]: time="2026-03-14T00:13:52.516490930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:13:52.517945 containerd[1492]: time="2026-03-14T00:13:52.517852031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:13:52.524450 containerd[1492]: time="2026-03-14T00:13:52.524211767Z" level=info msg="CreateContainer within sandbox \"d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:13:52.553419 containerd[1492]: time="2026-03-14T00:13:52.553238688Z" level=info msg="CreateContainer within sandbox \"d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8a19b0a3851ab06a7da09f4582bd1b939976857761d579414265cf41cbfc3fb6\"" Mar 14 00:13:52.554615 containerd[1492]: time="2026-03-14T00:13:52.554163342Z" level=info msg="StartContainer for \"8a19b0a3851ab06a7da09f4582bd1b939976857761d579414265cf41cbfc3fb6\"" Mar 14 00:13:52.585512 systemd-networkd[1384]: cali7eda0dd7fa0: Gained IPv6LL Mar 14 00:13:52.606481 systemd[1]: 
Started cri-containerd-8a19b0a3851ab06a7da09f4582bd1b939976857761d579414265cf41cbfc3fb6.scope - libcontainer container 8a19b0a3851ab06a7da09f4582bd1b939976857761d579414265cf41cbfc3fb6. Mar 14 00:13:52.652333 containerd[1492]: time="2026-03-14T00:13:52.650254959Z" level=info msg="StartContainer for \"8a19b0a3851ab06a7da09f4582bd1b939976857761d579414265cf41cbfc3fb6\" returns successfully" Mar 14 00:13:53.289322 kubelet[2624]: I0314 00:13:53.289133 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:13:54.457890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448084359.mount: Deactivated successfully. Mar 14 00:13:54.873415 kubelet[2624]: I0314 00:13:54.871587 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:13:55.031998 kubelet[2624]: I0314 00:13:55.031545 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7458dd48bf-crltd" podStartSLOduration=33.022881274 podStartE2EDuration="36.030992223s" podCreationTimestamp="2026-03-14 00:13:19 +0000 UTC" firstStartedPulling="2026-03-14 00:13:49.509134873 +0000 UTC m=+53.714229382" lastFinishedPulling="2026-03-14 00:13:52.517245822 +0000 UTC m=+56.722340331" observedRunningTime="2026-03-14 00:13:53.302243647 +0000 UTC m=+57.507338156" watchObservedRunningTime="2026-03-14 00:13:55.030992223 +0000 UTC m=+59.236086732" Mar 14 00:13:55.099157 containerd[1492]: time="2026-03-14T00:13:55.098318058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:55.099929 containerd[1492]: time="2026-03-14T00:13:55.099894562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 14 00:13:55.100292 containerd[1492]: time="2026-03-14T00:13:55.100248327Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:55.103678 containerd[1492]: time="2026-03-14T00:13:55.103638697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:55.105031 containerd[1492]: time="2026-03-14T00:13:55.104985237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.587093846s" Mar 14 00:13:55.105031 containerd[1492]: time="2026-03-14T00:13:55.105026797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 14 00:13:55.110686 containerd[1492]: time="2026-03-14T00:13:55.110650681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:13:55.113415 containerd[1492]: time="2026-03-14T00:13:55.113367401Z" level=info msg="CreateContainer within sandbox \"5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:13:55.143796 containerd[1492]: 
time="2026-03-14T00:13:55.143455326Z" level=info msg="CreateContainer within sandbox \"5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"c831ed36b833faa31d11d174a78f9b2676fee541c958765b1f5743955bb593bc\"" Mar 14 00:13:55.146294 containerd[1492]: time="2026-03-14T00:13:55.146006643Z" level=info msg="StartContainer for \"c831ed36b833faa31d11d174a78f9b2676fee541c958765b1f5743955bb593bc\"" Mar 14 00:13:55.228504 systemd[1]: Started cri-containerd-c831ed36b833faa31d11d174a78f9b2676fee541c958765b1f5743955bb593bc.scope - libcontainer container c831ed36b833faa31d11d174a78f9b2676fee541c958765b1f5743955bb593bc. Mar 14 00:13:55.278681 containerd[1492]: time="2026-03-14T00:13:55.278569683Z" level=info msg="StartContainer for \"c831ed36b833faa31d11d174a78f9b2676fee541c958765b1f5743955bb593bc\" returns successfully" Mar 14 00:13:55.416387 kubelet[2624]: I0314 00:13:55.415458 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-pr7sg" podStartSLOduration=29.907380186 podStartE2EDuration="35.415438827s" podCreationTimestamp="2026-03-14 00:13:20 +0000 UTC" firstStartedPulling="2026-03-14 00:13:49.599471713 +0000 UTC m=+53.804566182" lastFinishedPulling="2026-03-14 00:13:55.107530274 +0000 UTC m=+59.312624823" observedRunningTime="2026-03-14 00:13:55.317184694 +0000 UTC m=+59.522279203" watchObservedRunningTime="2026-03-14 00:13:55.415438827 +0000 UTC m=+59.620533336" Mar 14 00:13:55.931240 containerd[1492]: time="2026-03-14T00:13:55.930782366Z" level=info msg="StopPodSandbox for \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\"" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:55.990 [WARNING][5324] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0", GenerateName:"calico-kube-controllers-77bdccb5d5-", Namespace:"calico-system", SelfLink:"", UID:"ad64364e-d94a-400e-a2c3-7d753a27a0d8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bdccb5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e", Pod:"calico-kube-controllers-77bdccb5d5-c59xx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7eda0dd7fa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:55.991 [INFO][5324] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:55.991 [INFO][5324] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" iface="eth0" netns="" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:55.991 [INFO][5324] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:55.991 [INFO][5324] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:56.028 [INFO][5333] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:56.028 [INFO][5333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:56.029 [INFO][5333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:56.042 [WARNING][5333] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:56.042 [INFO][5333] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:56.044 [INFO][5333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.050695 containerd[1492]: 2026-03-14 00:13:56.047 [INFO][5324] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.052205 containerd[1492]: time="2026-03-14T00:13:56.050744173Z" level=info msg="TearDown network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\" successfully" Mar 14 00:13:56.052205 containerd[1492]: time="2026-03-14T00:13:56.050770134Z" level=info msg="StopPodSandbox for \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\" returns successfully" Mar 14 00:13:56.052205 containerd[1492]: time="2026-03-14T00:13:56.052045992Z" level=info msg="RemovePodSandbox for \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\"" Mar 14 00:13:56.059044 containerd[1492]: time="2026-03-14T00:13:56.058979014Z" level=info msg="Forcibly stopping sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\"" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.101 [WARNING][5348] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0", GenerateName:"calico-kube-controllers-77bdccb5d5-", Namespace:"calico-system", SelfLink:"", UID:"ad64364e-d94a-400e-a2c3-7d753a27a0d8", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77bdccb5d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e", Pod:"calico-kube-controllers-77bdccb5d5-c59xx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7eda0dd7fa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.101 [INFO][5348] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.101 [INFO][5348] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" iface="eth0" netns="" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.101 [INFO][5348] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.101 [INFO][5348] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.127 [INFO][5355] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.127 [INFO][5355] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.127 [INFO][5355] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.142 [WARNING][5355] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.142 [INFO][5355] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" HandleID="k8s-pod-network.431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--kube--controllers--77bdccb5d5--c59xx-eth0" Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.145 [INFO][5355] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.152449 containerd[1492]: 2026-03-14 00:13:56.149 [INFO][5348] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c" Mar 14 00:13:56.152449 containerd[1492]: time="2026-03-14T00:13:56.152484825Z" level=info msg="TearDown network for sandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\" successfully" Mar 14 00:13:56.186395 containerd[1492]: time="2026-03-14T00:13:56.185533630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:13:56.186395 containerd[1492]: time="2026-03-14T00:13:56.185626751Z" level=info msg="RemovePodSandbox \"431b5d504a0e39dbd5b17f63da6aeece613eb6d939be3eeffb33ea9734c0055c\" returns successfully" Mar 14 00:13:56.186395 containerd[1492]: time="2026-03-14T00:13:56.186340962Z" level=info msg="StopPodSandbox for \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\"" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.234 [WARNING][5376] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"54468044-a1de-4bd2-ad46-1b29248bc3b5", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741", Pod:"calico-apiserver-7458dd48bf-wjkkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaf471c7e5be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.235 [INFO][5376] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.235 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" iface="eth0" netns="" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.235 [INFO][5376] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.235 [INFO][5376] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.257 [INFO][5383] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.257 [INFO][5383] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.257 [INFO][5383] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.269 [WARNING][5383] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.270 [INFO][5383] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.272 [INFO][5383] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.278598 containerd[1492]: 2026-03-14 00:13:56.275 [INFO][5376] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.279246 containerd[1492]: time="2026-03-14T00:13:56.278833758Z" level=info msg="TearDown network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\" successfully" Mar 14 00:13:56.279246 containerd[1492]: time="2026-03-14T00:13:56.278860119Z" level=info msg="StopPodSandbox for \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\" returns successfully" Mar 14 00:13:56.279463 containerd[1492]: time="2026-03-14T00:13:56.279428047Z" level=info msg="RemovePodSandbox for \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\"" Mar 14 00:13:56.279519 containerd[1492]: time="2026-03-14T00:13:56.279471368Z" level=info msg="Forcibly stopping sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\"" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.335 [WARNING][5397] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"54468044-a1de-4bd2-ad46-1b29248bc3b5", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"95ed8d44f95522ccba64e28165e21bbbcbfe4ba95c759b3914a8262052fa6741", Pod:"calico-apiserver-7458dd48bf-wjkkn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliaf471c7e5be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.335 [INFO][5397] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.335 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" iface="eth0" netns="" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.335 [INFO][5397] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.335 [INFO][5397] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.363 [INFO][5419] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.365 [INFO][5419] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.365 [INFO][5419] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.375 [WARNING][5419] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.375 [INFO][5419] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" HandleID="k8s-pod-network.5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--wjkkn-eth0" Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.377 [INFO][5419] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.384543 containerd[1492]: 2026-03-14 00:13:56.381 [INFO][5397] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb" Mar 14 00:13:56.384543 containerd[1492]: time="2026-03-14T00:13:56.384493628Z" level=info msg="TearDown network for sandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\" successfully" Mar 14 00:13:56.391814 containerd[1492]: time="2026-03-14T00:13:56.391762294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:13:56.391950 containerd[1492]: time="2026-03-14T00:13:56.391870136Z" level=info msg="RemovePodSandbox \"5da35242f59324c40737e6c99a1f9c46b07200d3b1ecbb0b0dbe38cfa1b898eb\" returns successfully" Mar 14 00:13:56.394256 containerd[1492]: time="2026-03-14T00:13:56.393721603Z" level=info msg="StopPodSandbox for \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\"" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.447 [WARNING][5437] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"d6ead6b1-357d-411f-8456-c605fe68bb57", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a", Pod:"calico-apiserver-7458dd48bf-crltd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali758b4d3cc4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.448 [INFO][5437] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.448 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" iface="eth0" netns="" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.448 [INFO][5437] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.448 [INFO][5437] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.480 [INFO][5448] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.480 [INFO][5448] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.480 [INFO][5448] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.492 [WARNING][5448] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.492 [INFO][5448] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.494 [INFO][5448] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.500386 containerd[1492]: 2026-03-14 00:13:56.498 [INFO][5437] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.502055 containerd[1492]: time="2026-03-14T00:13:56.501894829Z" level=info msg="TearDown network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\" successfully" Mar 14 00:13:56.502055 containerd[1492]: time="2026-03-14T00:13:56.501926950Z" level=info msg="StopPodSandbox for \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\" returns successfully" Mar 14 00:13:56.502811 containerd[1492]: time="2026-03-14T00:13:56.502749722Z" level=info msg="RemovePodSandbox for \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\"" Mar 14 00:13:56.503193 containerd[1492]: time="2026-03-14T00:13:56.502886004Z" level=info msg="Forcibly stopping sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\"" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.555 [WARNING][5464] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0", GenerateName:"calico-apiserver-7458dd48bf-", Namespace:"calico-system", SelfLink:"", UID:"d6ead6b1-357d-411f-8456-c605fe68bb57", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7458dd48bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"d2b04bec9834c50cba0c4f330464965e878c4d18db9576a4872b4b794f4cfc7a", Pod:"calico-apiserver-7458dd48bf-crltd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali758b4d3cc4b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.555 [INFO][5464] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.555 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" iface="eth0" netns="" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.555 [INFO][5464] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.555 [INFO][5464] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.580 [INFO][5472] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.580 [INFO][5472] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.580 [INFO][5472] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.591 [WARNING][5472] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.592 [INFO][5472] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" HandleID="k8s-pod-network.66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Workload="ci--4081--3--6--n--8cab04691e-k8s-calico--apiserver--7458dd48bf--crltd-eth0" Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.593 [INFO][5472] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.602807 containerd[1492]: 2026-03-14 00:13:56.598 [INFO][5464] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d" Mar 14 00:13:56.603979 containerd[1492]: time="2026-03-14T00:13:56.603406998Z" level=info msg="TearDown network for sandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\" successfully" Mar 14 00:13:56.608424 containerd[1492]: time="2026-03-14T00:13:56.608369111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:13:56.608549 containerd[1492]: time="2026-03-14T00:13:56.608484953Z" level=info msg="RemovePodSandbox \"66ce6456e2fb033fd54acc367a938a694fbeaacb86f305e1cc1ec401c2e8a16d\" returns successfully" Mar 14 00:13:56.609536 containerd[1492]: time="2026-03-14T00:13:56.609484807Z" level=info msg="StopPodSandbox for \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\"" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.652 [WARNING][5487] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5ccf3f92-1893-45dd-8984-7c1c3523f0d0", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498", Pod:"coredns-66bc5c9577-d24ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7172ef362bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.653 [INFO][5487] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.653 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" iface="eth0" netns="" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.653 [INFO][5487] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.653 [INFO][5487] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.678 [INFO][5494] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.678 [INFO][5494] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.678 [INFO][5494] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.692 [WARNING][5494] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.692 [INFO][5494] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.694 [INFO][5494] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.699307 containerd[1492]: 2026-03-14 00:13:56.697 [INFO][5487] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.699307 containerd[1492]: time="2026-03-14T00:13:56.699170002Z" level=info msg="TearDown network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\" successfully" Mar 14 00:13:56.699307 containerd[1492]: time="2026-03-14T00:13:56.699195203Z" level=info msg="StopPodSandbox for \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\" returns successfully" Mar 14 00:13:56.701062 containerd[1492]: time="2026-03-14T00:13:56.700061056Z" level=info msg="RemovePodSandbox for \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\"" Mar 14 00:13:56.701062 containerd[1492]: time="2026-03-14T00:13:56.700092256Z" level=info msg="Forcibly stopping sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\"" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.743 [WARNING][5508] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5ccf3f92-1893-45dd-8984-7c1c3523f0d0", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"e33e287a58706a0e50f25cd8fe656c875ed9b4cf69f116b3b0e1dba158db4498", Pod:"coredns-66bc5c9577-d24ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7172ef362bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.746 [INFO][5508] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.746 [INFO][5508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" iface="eth0" netns="" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.746 [INFO][5508] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.746 [INFO][5508] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.768 [INFO][5515] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.768 [INFO][5515] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.768 [INFO][5515] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.780 [WARNING][5515] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.780 [INFO][5515] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" HandleID="k8s-pod-network.8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--d24ss-eth0" Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.782 [INFO][5515] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.787672 containerd[1492]: 2026-03-14 00:13:56.785 [INFO][5508] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43" Mar 14 00:13:56.787672 containerd[1492]: time="2026-03-14T00:13:56.787538658Z" level=info msg="TearDown network for sandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\" successfully" Mar 14 00:13:56.791491 containerd[1492]: time="2026-03-14T00:13:56.791417915Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:13:56.791643 containerd[1492]: time="2026-03-14T00:13:56.791510837Z" level=info msg="RemovePodSandbox \"8e8a0d69a71dbb9fdece237a879f17314232bbedafca70d4f19357fcf776fb43\" returns successfully" Mar 14 00:13:56.793076 containerd[1492]: time="2026-03-14T00:13:56.792225167Z" level=info msg="StopPodSandbox for \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\"" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.838 [WARNING][5529] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"176d1ac9-bc75-42c6-9936-a88fc33155e1", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a", Pod:"coredns-66bc5c9577-w4qz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f24e6d2c99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.839 [INFO][5529] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.839 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" iface="eth0" netns="" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.839 [INFO][5529] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.839 [INFO][5529] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.865 [INFO][5536] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.866 [INFO][5536] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.866 [INFO][5536] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.877 [WARNING][5536] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.877 [INFO][5536] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.879 [INFO][5536] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.884942 containerd[1492]: 2026-03-14 00:13:56.882 [INFO][5529] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.885606 containerd[1492]: time="2026-03-14T00:13:56.884963527Z" level=info msg="TearDown network for sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\" successfully" Mar 14 00:13:56.885606 containerd[1492]: time="2026-03-14T00:13:56.884989248Z" level=info msg="StopPodSandbox for \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\" returns successfully" Mar 14 00:13:56.885606 containerd[1492]: time="2026-03-14T00:13:56.885463934Z" level=info msg="RemovePodSandbox for \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\"" Mar 14 00:13:56.885606 containerd[1492]: time="2026-03-14T00:13:56.885489855Z" level=info msg="Forcibly stopping sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\"" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.930 [WARNING][5551] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"176d1ac9-bc75-42c6-9936-a88fc33155e1", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"c4173897aa875c41efc2770ad0ed69b2e9d37919cd75efa086e160bacdc54e4a", Pod:"coredns-66bc5c9577-w4qz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f24e6d2c99", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.930 [INFO][5551] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.930 [INFO][5551] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" iface="eth0" netns="" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.931 [INFO][5551] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.931 [INFO][5551] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.959 [INFO][5558] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.959 [INFO][5558] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.959 [INFO][5558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.972 [WARNING][5558] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.972 [INFO][5558] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" HandleID="k8s-pod-network.47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Workload="ci--4081--3--6--n--8cab04691e-k8s-coredns--66bc5c9577--w4qz7-eth0" Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.974 [INFO][5558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:56.980545 containerd[1492]: 2026-03-14 00:13:56.976 [INFO][5551] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865" Mar 14 00:13:56.981359 containerd[1492]: time="2026-03-14T00:13:56.980601370Z" level=info msg="TearDown network for sandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\" successfully" Mar 14 00:13:56.984312 containerd[1492]: time="2026-03-14T00:13:56.984250783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:13:56.984434 containerd[1492]: time="2026-03-14T00:13:56.984351905Z" level=info msg="RemovePodSandbox \"47a9d8d7d78a9c6d6d13b85e45684633f3835c86ce03ce59f4b6f8f3bf560865\" returns successfully" Mar 14 00:13:56.985180 containerd[1492]: time="2026-03-14T00:13:56.985148476Z" level=info msg="StopPodSandbox for \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\"" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.035 [WARNING][5572] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.035 [INFO][5572] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.035 [INFO][5572] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" iface="eth0" netns="" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.035 [INFO][5572] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.035 [INFO][5572] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.058 [INFO][5580] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.058 [INFO][5580] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.058 [INFO][5580] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.071 [WARNING][5580] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.071 [INFO][5580] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.073 [INFO][5580] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:57.080992 containerd[1492]: 2026-03-14 00:13:57.076 [INFO][5572] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.080992 containerd[1492]: time="2026-03-14T00:13:57.080757589Z" level=info msg="TearDown network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\" successfully" Mar 14 00:13:57.080992 containerd[1492]: time="2026-03-14T00:13:57.080797670Z" level=info msg="StopPodSandbox for \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\" returns successfully" Mar 14 00:13:57.082560 containerd[1492]: time="2026-03-14T00:13:57.081580521Z" level=info msg="RemovePodSandbox for \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\"" Mar 14 00:13:57.082560 containerd[1492]: time="2026-03-14T00:13:57.081672323Z" level=info msg="Forcibly stopping sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\"" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.130 [WARNING][5594] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" WorkloadEndpoint="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.130 [INFO][5594] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.130 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" iface="eth0" netns="" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.130 [INFO][5594] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.130 [INFO][5594] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.157 [INFO][5601] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.157 [INFO][5601] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.157 [INFO][5601] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.170 [WARNING][5601] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.170 [INFO][5601] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" HandleID="k8s-pod-network.f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Workload="ci--4081--3--6--n--8cab04691e-k8s-whisker--d7568446c--55d6n-eth0" Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.172 [INFO][5601] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:57.178091 containerd[1492]: 2026-03-14 00:13:57.175 [INFO][5594] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7" Mar 14 00:13:57.178091 containerd[1492]: time="2026-03-14T00:13:57.178053525Z" level=info msg="TearDown network for sandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\" successfully" Mar 14 00:13:57.181880 containerd[1492]: time="2026-03-14T00:13:57.181819580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:13:57.182145 containerd[1492]: time="2026-03-14T00:13:57.181898621Z" level=info msg="RemovePodSandbox \"f5aa3ba41efa8f171f0345458310cbcb6af5adc62736cdc7246e92b419b20bf7\" returns successfully" Mar 14 00:13:57.182414 containerd[1492]: time="2026-03-14T00:13:57.182382708Z" level=info msg="StopPodSandbox for \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\"" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.228 [WARNING][5615] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d", Pod:"goldmane-cccfbd5cf-pr7sg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif69d9053f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.229 [INFO][5615] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.229 [INFO][5615] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" iface="eth0" netns="" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.229 [INFO][5615] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.229 [INFO][5615] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.252 [INFO][5622] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.252 [INFO][5622] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.253 [INFO][5622] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.264 [WARNING][5622] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.264 [INFO][5622] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.267 [INFO][5622] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:57.278932 containerd[1492]: 2026-03-14 00:13:57.273 [INFO][5615] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.279369 containerd[1492]: time="2026-03-14T00:13:57.278985633Z" level=info msg="TearDown network for sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\" successfully" Mar 14 00:13:57.279369 containerd[1492]: time="2026-03-14T00:13:57.279036074Z" level=info msg="StopPodSandbox for \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\" returns successfully" Mar 14 00:13:57.279944 containerd[1492]: time="2026-03-14T00:13:57.279886967Z" level=info msg="RemovePodSandbox for \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\"" Mar 14 00:13:57.280008 containerd[1492]: time="2026-03-14T00:13:57.279965728Z" level=info msg="Forcibly stopping sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\"" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.339 [WARNING][5637] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"cf9c5ce0-11b8-40fd-9752-8b6c4229fbea", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"5ebef6033fdb10c7c129a7e38772bb8f1492e9aa58dfc72e272bf59774fa864d", Pod:"goldmane-cccfbd5cf-pr7sg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif69d9053f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.339 [INFO][5637] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.339 [INFO][5637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" iface="eth0" netns="" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.339 [INFO][5637] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.339 [INFO][5637] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.382 [INFO][5650] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.382 [INFO][5650] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.382 [INFO][5650] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.395 [WARNING][5650] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.396 [INFO][5650] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" HandleID="k8s-pod-network.ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Workload="ci--4081--3--6--n--8cab04691e-k8s-goldmane--cccfbd5cf--pr7sg-eth0" Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.398 [INFO][5650] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:57.407820 containerd[1492]: 2026-03-14 00:13:57.404 [INFO][5637] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411" Mar 14 00:13:57.407820 containerd[1492]: time="2026-03-14T00:13:57.406656851Z" level=info msg="TearDown network for sandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\" successfully" Mar 14 00:13:57.422836 containerd[1492]: time="2026-03-14T00:13:57.421725390Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:13:57.424710 containerd[1492]: time="2026-03-14T00:13:57.424105785Z" level=info msg="RemovePodSandbox \"ee5dad2c5a8460a2508bb883394fe00c6e29f8abafcc66709da0776bded57411\" returns successfully" Mar 14 00:13:57.424936 containerd[1492]: time="2026-03-14T00:13:57.424901876Z" level=info msg="StopPodSandbox for \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\"" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.490 [WARNING][5682] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f", Pod:"csi-node-driver-4k969", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09dfdca86cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.490 [INFO][5682] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.490 [INFO][5682] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" iface="eth0" netns="" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.490 [INFO][5682] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.490 [INFO][5682] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.519 [INFO][5689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.520 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.520 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.536 [WARNING][5689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.536 [INFO][5689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.540 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:57.547506 containerd[1492]: 2026-03-14 00:13:57.545 [INFO][5682] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.548114 containerd[1492]: time="2026-03-14T00:13:57.547634542Z" level=info msg="TearDown network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\" successfully" Mar 14 00:13:57.548114 containerd[1492]: time="2026-03-14T00:13:57.547690063Z" level=info msg="StopPodSandbox for \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\" returns successfully" Mar 14 00:13:57.548855 containerd[1492]: time="2026-03-14T00:13:57.548323832Z" level=info msg="RemovePodSandbox for \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\"" Mar 14 00:13:57.548855 containerd[1492]: time="2026-03-14T00:13:57.548372313Z" level=info msg="Forcibly stopping sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\"" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.611 [WARNING][5703] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c2769e1-ca6c-48f2-909e-e2592f4d7c1e", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 13, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-8cab04691e", ContainerID:"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f", Pod:"csi-node-driver-4k969", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09dfdca86cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.612 [INFO][5703] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.612 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" iface="eth0" netns="" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.612 [INFO][5703] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.612 [INFO][5703] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.642 [INFO][5710] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.643 [INFO][5710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.646 [INFO][5710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.657 [WARNING][5710] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.657 [INFO][5710] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" HandleID="k8s-pod-network.2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Workload="ci--4081--3--6--n--8cab04691e-k8s-csi--node--driver--4k969-eth0" Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.660 [INFO][5710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:13:57.667778 containerd[1492]: 2026-03-14 00:13:57.664 [INFO][5703] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949" Mar 14 00:13:57.667778 containerd[1492]: time="2026-03-14T00:13:57.667614488Z" level=info msg="TearDown network for sandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\" successfully" Mar 14 00:13:57.677786 containerd[1492]: time="2026-03-14T00:13:57.677730275Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 14 00:13:57.678030 containerd[1492]: time="2026-03-14T00:13:57.677815156Z" level=info msg="RemovePodSandbox \"2b2fdffeb1410ea0795d9af7c2a8b738855439cc1d14e5e6404591abc66b7949\" returns successfully" Mar 14 00:13:57.686967 containerd[1492]: time="2026-03-14T00:13:57.686902529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:57.688617 containerd[1492]: time="2026-03-14T00:13:57.688560433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 14 00:13:57.689481 containerd[1492]: time="2026-03-14T00:13:57.689431125Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:57.692943 containerd[1492]: time="2026-03-14T00:13:57.692546491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:57.693890 containerd[1492]: time="2026-03-14T00:13:57.693436664Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.581899809s" Mar 14 00:13:57.693890 containerd[1492]: time="2026-03-14T00:13:57.693472984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 14 00:13:57.695562 containerd[1492]: 
time="2026-03-14T00:13:57.695530734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:13:57.700058 containerd[1492]: time="2026-03-14T00:13:57.700021199Z" level=info msg="CreateContainer within sandbox \"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:13:57.720993 containerd[1492]: time="2026-03-14T00:13:57.720911063Z" level=info msg="CreateContainer within sandbox \"3c51c5c41641d4a1b41e56231ed8d699eac4465b7345b58a430cf1b5becd1b3f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d29a09984a1c44382651a5712a3040116a379e67c484034d0dacdb49a5dc2d06\"" Mar 14 00:13:57.722662 containerd[1492]: time="2026-03-14T00:13:57.721727955Z" level=info msg="StartContainer for \"d29a09984a1c44382651a5712a3040116a379e67c484034d0dacdb49a5dc2d06\"" Mar 14 00:13:57.762480 systemd[1]: Started cri-containerd-d29a09984a1c44382651a5712a3040116a379e67c484034d0dacdb49a5dc2d06.scope - libcontainer container d29a09984a1c44382651a5712a3040116a379e67c484034d0dacdb49a5dc2d06. Mar 14 00:13:57.796354 containerd[1492]: time="2026-03-14T00:13:57.795033262Z" level=info msg="StartContainer for \"d29a09984a1c44382651a5712a3040116a379e67c484034d0dacdb49a5dc2d06\" returns successfully" Mar 14 00:13:58.029753 kubelet[2624]: I0314 00:13:58.029650 2624 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:13:58.033643 kubelet[2624]: I0314 00:13:58.033612 2624 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:13:58.349308 kubelet[2624]: I0314 00:13:58.349107 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4k969" podStartSLOduration=28.14621086 podStartE2EDuration="37.349093684s" podCreationTimestamp="2026-03-14 00:13:21 +0000 UTC" firstStartedPulling="2026-03-14 00:13:48.491776417 +0000 UTC m=+52.696870926" lastFinishedPulling="2026-03-14 00:13:57.694659241 +0000 UTC m=+61.899753750" observedRunningTime="2026-03-14 00:13:58.34878688 +0000 UTC m=+62.553881429" watchObservedRunningTime="2026-03-14 00:13:58.349093684 +0000 UTC m=+62.554188193" Mar 14 00:14:00.904338 containerd[1492]: time="2026-03-14T00:14:00.903697267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:00.907836 containerd[1492]: time="2026-03-14T00:14:00.906629388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 14 00:14:00.909588 containerd[1492]: time="2026-03-14T00:14:00.909538990Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:00.912847 containerd[1492]: time="2026-03-14T00:14:00.912806316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:00.914302 containerd[1492]: time="2026-03-14T00:14:00.913492446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with 
image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.21778635s" Mar 14 00:14:00.914302 containerd[1492]: time="2026-03-14T00:14:00.913527806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 14 00:14:00.930095 containerd[1492]: time="2026-03-14T00:14:00.930054761Z" level=info msg="CreateContainer within sandbox \"381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:14:00.949311 containerd[1492]: time="2026-03-14T00:14:00.948815068Z" level=info msg="CreateContainer within sandbox \"381bdf40cb1c62db0d898f245df5a0657b1ac88922d2a851529c5b3aee96c20e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2ca52171cf2ed176899d6f00438ca1adb29ba7a942c1bce6935f6518f00a99a1\"" Mar 14 00:14:00.950761 containerd[1492]: time="2026-03-14T00:14:00.950615614Z" level=info msg="StartContainer for \"2ca52171cf2ed176899d6f00438ca1adb29ba7a942c1bce6935f6518f00a99a1\"" Mar 14 00:14:00.987471 systemd[1]: Started cri-containerd-2ca52171cf2ed176899d6f00438ca1adb29ba7a942c1bce6935f6518f00a99a1.scope - libcontainer container 2ca52171cf2ed176899d6f00438ca1adb29ba7a942c1bce6935f6518f00a99a1. Mar 14 00:14:01.042654 containerd[1492]: time="2026-03-14T00:14:01.042561518Z" level=info msg="StartContainer for \"2ca52171cf2ed176899d6f00438ca1adb29ba7a942c1bce6935f6518f00a99a1\" returns successfully" Mar 14 00:14:01.418516 kubelet[2624]: I0314 00:14:01.418438 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77bdccb5d5-c59xx" podStartSLOduration=30.93249892 podStartE2EDuration="40.418419666s" podCreationTimestamp="2026-03-14 00:13:21 +0000 UTC" firstStartedPulling="2026-03-14 00:13:51.428435912 +0000 UTC m=+55.633530381" lastFinishedPulling="2026-03-14 00:14:00.914356618 +0000 UTC m=+65.119451127" observedRunningTime="2026-03-14 00:14:01.375067254 +0000 UTC m=+65.580161763" watchObservedRunningTime="2026-03-14 00:14:01.418419666 +0000 UTC m=+65.623514175" Mar 14 00:14:27.348855 systemd[1]: run-containerd-runc-k8s.io-c831ed36b833faa31d11d174a78f9b2676fee541c958765b1f5743955bb593bc-runc.oEuES1.mount: Deactivated successfully. Mar 14 00:15:07.358620 systemd[1]: Started sshd@9-188.245.55.47:22-118.145.184.208:34524.service - OpenSSH per-connection server daemon (118.145.184.208:34524). Mar 14 00:15:33.819691 systemd[1]: Started sshd@10-188.245.55.47:22-68.220.241.50:44774.service - OpenSSH per-connection server daemon (68.220.241.50:44774). Mar 14 00:15:34.410144 sshd[6168]: Accepted publickey for core from 68.220.241.50 port 44774 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:15:34.413453 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:34.419428 systemd-logind[1460]: New session 8 of user core. Mar 14 00:15:34.421532 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:15:34.913201 sshd[6168]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:34.917888 systemd-logind[1460]: Session 8 logged out. Waiting for processes to exit. 
Mar 14 00:15:34.918196 systemd[1]: sshd@10-188.245.55.47:22-68.220.241.50:44774.service: Deactivated successfully. Mar 14 00:15:34.921074 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:15:34.923546 systemd-logind[1460]: Removed session 8. Mar 14 00:15:40.021857 systemd[1]: Started sshd@11-188.245.55.47:22-68.220.241.50:44788.service - OpenSSH per-connection server daemon (68.220.241.50:44788). Mar 14 00:15:40.610369 sshd[6221]: Accepted publickey for core from 68.220.241.50 port 44788 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:15:40.612095 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:40.618210 systemd-logind[1460]: New session 9 of user core. Mar 14 00:15:40.627593 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:15:41.105784 sshd[6221]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:41.111806 systemd[1]: sshd@11-188.245.55.47:22-68.220.241.50:44788.service: Deactivated successfully. Mar 14 00:15:41.115213 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:15:41.116348 systemd-logind[1460]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:15:41.118836 systemd-logind[1460]: Removed session 9. Mar 14 00:15:46.218597 systemd[1]: Started sshd@12-188.245.55.47:22-68.220.241.50:54958.service - OpenSSH per-connection server daemon (68.220.241.50:54958). Mar 14 00:15:46.802550 sshd[6236]: Accepted publickey for core from 68.220.241.50 port 54958 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:15:46.805016 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:46.812478 systemd-logind[1460]: New session 10 of user core. Mar 14 00:15:46.818660 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:15:47.293441 sshd[6236]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:47.297500 systemd-logind[1460]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:15:47.297891 systemd[1]: sshd@12-188.245.55.47:22-68.220.241.50:54958.service: Deactivated successfully. Mar 14 00:15:47.302527 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:15:47.307172 systemd-logind[1460]: Removed session 10. Mar 14 00:15:52.405621 systemd[1]: Started sshd@13-188.245.55.47:22-68.220.241.50:50160.service - OpenSSH per-connection server daemon (68.220.241.50:50160). Mar 14 00:15:52.990172 sshd[6249]: Accepted publickey for core from 68.220.241.50 port 50160 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:15:52.992977 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:53.001412 systemd-logind[1460]: New session 11 of user core. Mar 14 00:15:53.009015 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:15:53.480336 sshd[6249]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:53.489200 systemd[1]: sshd@13-188.245.55.47:22-68.220.241.50:50160.service: Deactivated successfully. Mar 14 00:15:53.492157 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:15:53.493782 systemd-logind[1460]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:15:53.495834 systemd-logind[1460]: Removed session 11. Mar 14 00:15:53.594560 systemd[1]: Started sshd@14-188.245.55.47:22-68.220.241.50:50170.service - OpenSSH per-connection server daemon (68.220.241.50:50170). 
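Each SSH connection in this stretch follows the same journal pattern: systemd starts a per-connection sshd@… unit, sshd accepts the publickey, pam_unix opens the session, systemd-logind assigns a session number, and teardown reverses the sequence. A small hypothetical scanner that pairs the logind "New session N" and "Removed session N" lines to measure session lifetimes (line format assumed exactly as shown above; the two sample lines are copied verbatim from session 9):

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

// Matches logind lines like:
//   Mar 14 00:15:40.618210 systemd-logind[1460]: New session 9 of user core.
//   Mar 14 00:15:41.118836 systemd-logind[1460]: Removed session 9.
var sessRe = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)`)

const stamp = "Jan 2 15:04:05.000000" // no year in the syslog-style prefix

func main() {
	log := `Mar 14 00:15:40.618210 systemd-logind[1460]: New session 9 of user core.
Mar 14 00:15:41.118836 systemd-logind[1460]: Removed session 9.`

	opened := map[string]time.Time{} // session number -> open time
	sc := bufio.NewScanner(strings.NewReader(log))
	for sc.Scan() {
		m := sessRe.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		t, err := time.Parse(stamp, m[1])
		if err != nil {
			continue
		}
		if m[2] == "New" {
			opened[m[3]] = t
		} else if start, ok := opened[m[3]]; ok {
			fmt.Printf("session %s lasted %v\n", m[3], t.Sub(start)) // session 9 lasted 500.626ms
		}
	}
}
```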
Mar 14 00:15:54.178379 sshd[6269]: Accepted publickey for core from 68.220.241.50 port 50170 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:15:54.179512 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:54.184703 systemd-logind[1460]: New session 12 of user core. Mar 14 00:15:54.190406 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:15:54.719980 sshd[6269]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:54.726151 systemd[1]: sshd@14-188.245.55.47:22-68.220.241.50:50170.service: Deactivated successfully. Mar 14 00:15:54.729013 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:15:54.730958 systemd-logind[1460]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:15:54.733008 systemd-logind[1460]: Removed session 12. Mar 14 00:15:54.828508 systemd[1]: Started sshd@15-188.245.55.47:22-68.220.241.50:50184.service - OpenSSH per-connection server daemon (68.220.241.50:50184). Mar 14 00:15:55.414717 sshd[6293]: Accepted publickey for core from 68.220.241.50 port 50184 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:15:55.418236 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:55.424540 systemd-logind[1460]: New session 13 of user core. Mar 14 00:15:55.428665 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:15:55.906654 sshd[6293]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:55.912490 systemd-logind[1460]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:15:55.913064 systemd[1]: sshd@15-188.245.55.47:22-68.220.241.50:50184.service: Deactivated successfully. Mar 14 00:15:55.916223 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:15:55.917854 systemd-logind[1460]: Removed session 13. Mar 14 00:16:01.017887 systemd[1]: Started sshd@16-188.245.55.47:22-68.220.241.50:50192.service - OpenSSH per-connection server daemon (68.220.241.50:50192). Mar 14 00:16:01.601325 sshd[6348]: Accepted publickey for core from 68.220.241.50 port 50192 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:16:01.602770 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:01.609145 systemd-logind[1460]: New session 14 of user core. Mar 14 00:16:01.614546 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:16:02.097492 sshd[6348]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:02.102383 systemd[1]: sshd@16-188.245.55.47:22-68.220.241.50:50192.service: Deactivated successfully. Mar 14 00:16:02.106465 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:16:02.108488 systemd-logind[1460]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:16:02.109838 systemd-logind[1460]: Removed session 14. Mar 14 00:16:02.207659 systemd[1]: Started sshd@17-188.245.55.47:22-68.220.241.50:47834.service - OpenSSH per-connection server daemon (68.220.241.50:47834). Mar 14 00:16:02.790827 sshd[6379]: Accepted publickey for core from 68.220.241.50 port 47834 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:16:02.793565 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:02.798927 systemd-logind[1460]: New session 15 of user core. Mar 14 00:16:02.804474 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 14 00:16:03.417414 sshd[6379]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:03.422609 systemd[1]: sshd@17-188.245.55.47:22-68.220.241.50:47834.service: Deactivated successfully. Mar 14 00:16:03.425752 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:16:03.429543 systemd-logind[1460]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:16:03.432526 systemd-logind[1460]: Removed session 15. Mar 14 00:16:03.526733 systemd[1]: Started sshd@18-188.245.55.47:22-68.220.241.50:47844.service - OpenSSH per-connection server daemon (68.220.241.50:47844). Mar 14 00:16:04.109528 sshd[6391]: Accepted publickey for core from 68.220.241.50 port 47844 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:16:04.111751 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:04.117704 systemd-logind[1460]: New session 16 of user core. Mar 14 00:16:04.123567 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:16:05.245599 sshd[6391]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:05.252215 systemd-logind[1460]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:16:05.252934 systemd[1]: sshd@18-188.245.55.47:22-68.220.241.50:47844.service: Deactivated successfully. Mar 14 00:16:05.255596 systemd[1]: session-16.scope: Deactivated successfully. Mar 14 00:16:05.260166 systemd-logind[1460]: Removed session 16. Mar 14 00:16:05.356788 systemd[1]: Started sshd@19-188.245.55.47:22-68.220.241.50:47846.service - OpenSSH per-connection server daemon (68.220.241.50:47846). Mar 14 00:16:05.945775 sshd[6419]: Accepted publickey for core from 68.220.241.50 port 47846 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:16:05.947408 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:05.956870 systemd-logind[1460]: New session 17 of user core. Mar 14 00:16:05.962615 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:16:06.571413 sshd[6419]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:06.577104 systemd[1]: sshd@19-188.245.55.47:22-68.220.241.50:47846.service: Deactivated successfully. Mar 14 00:16:06.580898 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:16:06.583035 systemd-logind[1460]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:16:06.584708 systemd-logind[1460]: Removed session 17. Mar 14 00:16:06.684831 systemd[1]: Started sshd@20-188.245.55.47:22-68.220.241.50:47858.service - OpenSSH per-connection server daemon (68.220.241.50:47858). Mar 14 00:16:07.270326 sshd[6432]: Accepted publickey for core from 68.220.241.50 port 47858 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:16:07.271886 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:07.277333 systemd-logind[1460]: New session 18 of user core. Mar 14 00:16:07.284615 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 14 00:16:07.759639 sshd[6432]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:07.764371 systemd[1]: sshd@20-188.245.55.47:22-68.220.241.50:47858.service: Deactivated successfully. Mar 14 00:16:07.768101 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:16:07.769645 systemd-logind[1460]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:16:07.770809 systemd-logind[1460]: Removed session 18. 
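A side note on the unit names recurring above: each per-connection service encodes the full TCP four-tuple in its instance string, sshd@<seq>-<local-ip>:<local-port>-<remote-ip>:<remote-port>.service, which is why the "Started sshd@…" line alone identifies the peer. A hypothetical helper to split such a name back into its parts (IPv4 only, format inferred from these entries):

```go
package main

import (
	"fmt"
	"regexp"
)

// e.g. "sshd@17-188.245.55.47:22-68.220.241.50:47834.service"
var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	unit := "sshd@17-188.245.55.47:22-68.220.241.50:47834.service"
	if m := unitRe.FindStringSubmatch(unit); m != nil {
		fmt.Printf("seq=%s local=%s:%s remote=%s:%s\n", m[1], m[2], m[3], m[4], m[5])
		// seq=17 local=188.245.55.47:22 remote=68.220.241.50:47834
	}
}
```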
Mar 14 00:16:12.872637 systemd[1]: Started sshd@21-188.245.55.47:22-68.220.241.50:44560.service - OpenSSH per-connection server daemon (68.220.241.50:44560). Mar 14 00:16:13.455361 sshd[6468]: Accepted publickey for core from 68.220.241.50 port 44560 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:16:13.458699 sshd[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:13.465085 systemd-logind[1460]: New session 19 of user core. Mar 14 00:16:13.469611 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:16:13.946238 sshd[6468]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:13.951700 systemd-logind[1460]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:16:13.952953 systemd[1]: sshd@21-188.245.55.47:22-68.220.241.50:44560.service: Deactivated successfully. Mar 14 00:16:13.955973 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:16:13.957686 systemd-logind[1460]: Removed session 19. Mar 14 00:16:19.056717 systemd[1]: Started sshd@22-188.245.55.47:22-68.220.241.50:44566.service - OpenSSH per-connection server daemon (68.220.241.50:44566). Mar 14 00:16:19.640133 sshd[6490]: Accepted publickey for core from 68.220.241.50 port 44566 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:16:19.642928 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:19.647936 systemd-logind[1460]: New session 20 of user core. Mar 14 00:16:19.655459 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 14 00:16:20.138121 sshd[6490]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:20.142334 systemd-logind[1460]: Session 20 logged out. Waiting for processes to exit. Mar 14 00:16:20.142489 systemd[1]: sshd@22-188.245.55.47:22-68.220.241.50:44566.service: Deactivated successfully. Mar 14 00:16:20.144520 systemd[1]: session-20.scope: Deactivated successfully. Mar 14 00:16:20.146886 systemd-logind[1460]: Removed session 20. Mar 14 00:16:34.958268 kubelet[2624]: E0314 00:16:34.957681 2624 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42342->10.0.0.2:2379: read: connection timed out" Mar 14 00:16:35.262023 systemd[1]: cri-containerd-0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374.scope: Deactivated successfully. Mar 14 00:16:35.262605 systemd[1]: cri-containerd-0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374.scope: Consumed 5.599s CPU time, 20.1M memory peak, 0B memory swap peak. Mar 14 00:16:35.289270 containerd[1492]: time="2026-03-14T00:16:35.289207172Z" level=info msg="shim disconnected" id=0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374 namespace=k8s.io Mar 14 00:16:35.289270 containerd[1492]: time="2026-03-14T00:16:35.289264413Z" level=warning msg="cleaning up after shim disconnected" id=0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374 namespace=k8s.io Mar 14 00:16:35.289270 containerd[1492]: time="2026-03-14T00:16:35.289285093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:35.290773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374-rootfs.mount: Deactivated successfully. 
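The containerd triple at the end of this block ("shim disconnected", "cleaning up after shim disconnected", "cleaning up dead shim") is the signature of a container exiting, and the systemd scope lines just before it record the container's final CPU and memory accounting. A hypothetical scanner that pulls the 64-hex container ID out of that signature so exits can be tallied per container (sample line copied from the log above):

```go
package main

import (
	"fmt"
	"regexp"
)

// containerd logs a container exit as:
//   level=info msg="shim disconnected" id=<64-hex-id> namespace=k8s.io
var shimRe = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)

func main() {
	line := `time="2026-03-14T00:16:35.289207172Z" level=info msg="shim disconnected" id=0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374 namespace=k8s.io`

	exits := map[string]int{} // container ID -> observed exits
	if m := shimRe.FindStringSubmatch(line); m != nil {
		exits[m[1]]++
	}
	for id, n := range exits {
		fmt.Printf("%s… exited %d time(s)\n", id[:12], n)
	}
}
```

Repeated exits for the same ID in a short window, together with a rising Attempt counter in the entries that follow, is a quick crash-loop indicator.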
Mar 14 00:16:35.844212 kubelet[2624]: I0314 00:16:35.843815 2624 scope.go:117] "RemoveContainer" containerID="0d645b0ca8d372484dab36fe62e25b0f055f4a28e62e09083ab92e32f85d6374" Mar 14 00:16:35.847246 containerd[1492]: time="2026-03-14T00:16:35.846982109Z" level=info msg="CreateContainer within sandbox \"efbf7af38f8b2593276cbb853f2f08ac946f013d383d74f044a1f5a50e7a4f58\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 14 00:16:35.863997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676203762.mount: Deactivated successfully. Mar 14 00:16:35.866353 containerd[1492]: time="2026-03-14T00:16:35.866266210Z" level=info msg="CreateContainer within sandbox \"efbf7af38f8b2593276cbb853f2f08ac946f013d383d74f044a1f5a50e7a4f58\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bd66425f2ab42d533b17bfc6464f954046e7a588f8ede44710c34a56a177a3ee\"" Mar 14 00:16:35.866858 containerd[1492]: time="2026-03-14T00:16:35.866799058Z" level=info msg="StartContainer for \"bd66425f2ab42d533b17bfc6464f954046e7a588f8ede44710c34a56a177a3ee\"" Mar 14 00:16:35.910485 systemd[1]: Started cri-containerd-bd66425f2ab42d533b17bfc6464f954046e7a588f8ede44710c34a56a177a3ee.scope - libcontainer container bd66425f2ab42d533b17bfc6464f954046e7a588f8ede44710c34a56a177a3ee. Mar 14 00:16:35.953394 containerd[1492]: time="2026-03-14T00:16:35.953329728Z" level=info msg="StartContainer for \"bd66425f2ab42d533b17bfc6464f954046e7a588f8ede44710c34a56a177a3ee\" returns successfully" Mar 14 00:16:36.289775 systemd[1]: run-containerd-runc-k8s.io-bd66425f2ab42d533b17bfc6464f954046e7a588f8ede44710c34a56a177a3ee-runc.HVbaU3.mount: Deactivated successfully. Mar 14 00:16:36.499116 systemd[1]: cri-containerd-bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9.scope: Deactivated successfully. Mar 14 00:16:36.499868 systemd[1]: cri-containerd-bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9.scope: Consumed 11.456s CPU time. Mar 14 00:16:36.529738 containerd[1492]: time="2026-03-14T00:16:36.529584642Z" level=info msg="shim disconnected" id=bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9 namespace=k8s.io Mar 14 00:16:36.529738 containerd[1492]: time="2026-03-14T00:16:36.529651643Z" level=warning msg="cleaning up after shim disconnected" id=bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9 namespace=k8s.io Mar 14 00:16:36.529738 containerd[1492]: time="2026-03-14T00:16:36.529660043Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:36.532362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9-rootfs.mount: Deactivated successfully. 
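The recovery path shows up next: the kubelet drops the dead kube-controller-manager container ("RemoveContainer") and asks containerd to create a replacement in the same sandbox with the restart counter bumped to Attempt:1. That counter is embedded in the logged &ContainerMetadata{…} literal; a hypothetical extractor for it (the Go-struct-style format is taken verbatim from these entries; the sandbox ID in the sample is deliberately truncated):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// Matches the metadata literal containerd logs, e.g.
//   &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}
var metaRe = regexp.MustCompile(`&ContainerMetadata\{Name:([^,]+),Attempt:(\d+),\}`)

func main() {
	msg := `CreateContainer within sandbox "efbf7af38f8b…" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}`
	m := metaRe.FindStringSubmatch(msg)
	if m == nil {
		return
	}
	attempt, _ := strconv.Atoi(m[2])
	if attempt > 0 {
		// Attempt > 0 means this is a restart of a previously exited container.
		fmt.Printf("%s is on restart #%d\n", m[1], attempt)
	}
}
```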
Mar 14 00:16:36.847327 kubelet[2624]: I0314 00:16:36.847143 2624 scope.go:117] "RemoveContainer" containerID="bdec9cd4297a89aabe3e817f11659f7774fcb3e81e063b56f2575b1ad1543fb9" Mar 14 00:16:36.850534 containerd[1492]: time="2026-03-14T00:16:36.850432186Z" level=info msg="CreateContainer within sandbox \"083888e670bdd3f833f2c3ea872331a7d256703ed3cb955089a9940fe42e5366\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 14 00:16:36.870863 containerd[1492]: time="2026-03-14T00:16:36.870822903Z" level=info msg="CreateContainer within sandbox \"083888e670bdd3f833f2c3ea872331a7d256703ed3cb955089a9940fe42e5366\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3\"" Mar 14 00:16:36.872314 containerd[1492]: time="2026-03-14T00:16:36.871469113Z" level=info msg="StartContainer for \"0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3\"" Mar 14 00:16:36.913558 systemd[1]: Started cri-containerd-0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3.scope - libcontainer container 0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3. Mar 14 00:16:36.953443 containerd[1492]: time="2026-03-14T00:16:36.953394985Z" level=info msg="StartContainer for \"0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3\" returns successfully" Mar 14 00:16:38.988102 kubelet[2624]: E0314 00:16:38.987710 2624 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42006->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-8cab04691e.189c8d00f6236f02 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-8cab04691e,UID:848a31b41504c8c149ae27a777747bd7,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-8cab04691e,},FirstTimestamp:2026-03-14 00:16:28.54616653 +0000 UTC m=+212.751261079,LastTimestamp:2026-03-14 00:16:28.54616653 +0000 UTC m=+212.751261079,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-8cab04691e,}" Mar 14 00:16:40.343588 systemd[1]: cri-containerd-0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3.scope: Deactivated successfully. Mar 14 00:16:40.368018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3-rootfs.mount: Deactivated successfully. Mar 14 00:16:40.375605 containerd[1492]: time="2026-03-14T00:16:40.375380391Z" level=info msg="shim disconnected" id=0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3 namespace=k8s.io Mar 14 00:16:40.375605 containerd[1492]: time="2026-03-14T00:16:40.375442192Z" level=warning msg="cleaning up after shim disconnected" id=0f783699c93b0da4644d9ac43d7e001f834de9a68b67e22b8fba6690189f0ed3 namespace=k8s.io Mar 14 00:16:40.375605 containerd[1492]: time="2026-03-14T00:16:40.375453072Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:40.445222 systemd[1]: cri-containerd-eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a.scope: Deactivated successfully. 
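The rejected Event at the end of this block is doubly diagnostic: the kubelet could not persist it because its own etcd read timed out (read tcp 10.0.0.3:42006->10.0.0.2:2379), and the event's payload reports the kube-apiserver readiness probe returning HTTP 500, two symptoms of the same unhealthy etcd. A generic sketch of a readiness handler with that shape, returning 500 while a backing dependency is unreachable (purely illustrative; this is not apiserver code, and the etcd address is simply reused from the log):

```go
package main

import (
	"context"
	"net"
	"net/http"
	"time"
)

// readyz answers 500 while a backing dependency (here: a bare TCP dial to
// etcd's client port) is unreachable, the same status the kubelet recorded
// above for the kube-apiserver probe. The real /readyz checks far more
// than TCP reachability.
func readyz(w http.ResponseWriter, r *http.Request) {
	ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
	defer cancel()
	var d net.Dialer
	conn, err := d.DialContext(ctx, "tcp", "10.0.0.2:2379")
	if err != nil {
		http.Error(w, "etcd unreachable: "+err.Error(), http.StatusInternalServerError)
		return
	}
	conn.Close()
	w.Write([]byte("ok"))
}

func main() {
	http.HandleFunc("/readyz", readyz)
	_ = http.ListenAndServe(":8080", nil)
}
```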
Mar 14 00:16:40.446406 systemd[1]: cri-containerd-eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a.scope: Consumed 4.208s CPU time, 15.7M memory peak, 0B memory swap peak. Mar 14 00:16:40.473968 containerd[1492]: time="2026-03-14T00:16:40.473229609Z" level=info msg="shim disconnected" id=eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a namespace=k8s.io Mar 14 00:16:40.473968 containerd[1492]: time="2026-03-14T00:16:40.473448972Z" level=warning msg="cleaning up after shim disconnected" id=eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a namespace=k8s.io Mar 14 00:16:40.473968 containerd[1492]: time="2026-03-14T00:16:40.473462492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:40.474855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a-rootfs.mount: Deactivated successfully.
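Each scope terminated during this incident leaves a final accounting line of the form "Consumed 4.208s CPU time, 15.7M memory peak, 0B memory swap peak", captured from the cgroup before teardown. A hypothetical parser for those lines, handy for attributing CPU to short-lived containers after the fact (format assumed exactly as logged above):

```go
package main

import (
	"fmt"
	"regexp"
)

// Matches systemd's final scope accounting, e.g.
//   cri-containerd-<id>.scope: Consumed 4.208s CPU time, 15.7M memory peak
var acctRe = regexp.MustCompile(`cri-containerd-([0-9a-f]{12})[0-9a-f]*\.scope: Consumed ([\d.]+s) CPU time, ([\d.]+[BKMGT]?) memory peak`)

func main() {
	line := `Mar 14 00:16:40.446406 systemd[1]: cri-containerd-eb892142ec0ab8866fff9c2cee7a9f09d46b7133e42308635df0a5107925ed9a.scope: Consumed 4.208s CPU time, 15.7M memory peak, 0B memory swap peak.`
	if m := acctRe.FindStringSubmatch(line); m != nil {
		// container eb892142ec0a…: cpu=4.208s memPeak=15.7M
		fmt.Printf("container %s…: cpu=%s memPeak=%s\n", m[1], m[2], m[3])
	}
}
```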