Mar 14 00:08:11.887607 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 14 00:08:11.887637 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:08:11.887649 kernel: KASLR enabled
Mar 14 00:08:11.887655 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 14 00:08:11.887661 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Mar 14 00:08:11.887667 kernel: random: crng init done
Mar 14 00:08:11.887675 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:08:11.887681 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 14 00:08:11.887688 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 14 00:08:11.887696 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887702 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887709 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887715 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887722 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887730 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887738 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887745 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887752 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:08:11.887759 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 14 00:08:11.887766 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 14 00:08:11.887772 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:08:11.887779 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:08:11.887786 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Mar 14 00:08:11.887793 kernel: Zone ranges:
Mar 14 00:08:11.887799 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:08:11.887808 kernel: DMA32 empty
Mar 14 00:08:11.887814 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 14 00:08:11.887821 kernel: Movable zone start for each node
Mar 14 00:08:11.887828 kernel: Early memory node ranges
Mar 14 00:08:11.887835 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 14 00:08:11.887842 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 14 00:08:11.887848 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 14 00:08:11.887855 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 14 00:08:11.887862 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 14 00:08:11.887869 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 14 00:08:11.887875 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 14 00:08:11.887882 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 14 00:08:11.887890 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 14 00:08:11.887897 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:08:11.887904 kernel: psci: PSCIv1.1 detected in firmware.
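The faked NUMA node above spans [mem 0x0000000040000000-0x0000000139ffffff]. A small arithmetic sketch (not part of the log) confirms that this inclusive span is 1,024,000 pages of 4 KiB, i.e. the 4096000K total that the kernel's later "Memory: 3882816K/4096000K available" line reports before reservations:

```python
# Span of the single faked NUMA node, as logged above (inclusive range).
start = 0x0000000040000000
end = 0x0000000139ffffff

PAGE_SIZE = 4096  # 4 KiB pages, consistent with the page counts in this log

span_bytes = end - start + 1          # inclusive, so add 1
pages = span_bytes // PAGE_SIZE
kib = span_bytes // 1024

print(pages, kib)  # 1024000 4096000
```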
Mar 14 00:08:11.887914 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:08:11.887921 kernel: psci: Trusted OS migration not required
Mar 14 00:08:11.887928 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:08:11.887937 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 14 00:08:11.887945 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:08:11.887952 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:08:11.887959 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:08:11.887966 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:08:11.887974 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:08:11.887981 kernel: CPU features: detected: Hardware dirty bit management
Mar 14 00:08:11.887989 kernel: CPU features: detected: Spectre-v4
Mar 14 00:08:11.887996 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:08:11.888003 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 14 00:08:11.888012 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 14 00:08:11.888019 kernel: CPU features: detected: ARM erratum 1418040
Mar 14 00:08:11.888026 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 14 00:08:11.888034 kernel: alternatives: applying boot alternatives
Mar 14 00:08:11.888042 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:08:11.888050 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:08:11.888057 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:08:11.888064 kernel: Fallback order for Node 0: 0
Mar 14 00:08:11.888071 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 14 00:08:11.888078 kernel: Policy zone: Normal
Mar 14 00:08:11.888086 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:08:11.888094 kernel: software IO TLB: area num 2.
Mar 14 00:08:11.888102 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 14 00:08:11.888109 kernel: Memory: 3882816K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213184K reserved, 0K cma-reserved)
Mar 14 00:08:11.888117 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:08:11.888124 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:08:11.888132 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:08:11.888140 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:08:11.888147 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:08:11.888154 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:08:11.888162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
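The "Kernel command line" entry above is a flat list of space-separated parameters, some bare flags and some key=value pairs. A minimal parsing sketch (the string below is a subset of the parameters logged above; at runtime the same text is readable from /proc/cmdline):

```python
# Subset of the boot parameters from the log above.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
    "console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force "
    "flatcar.oem.id=hetzner"
)

def parse_cmdline(s: str) -> dict:
    params = {}
    for token in s.split():
        # Split on the FIRST '=' only, so values like LABEL=ROOT survive intact.
        key, sep, value = token.partition("=")
        params[key] = value if sep else True  # bare flags become True
    return params

params = parse_cmdline(cmdline)
print(params["root"], params["flatcar.oem.id"])  # LABEL=ROOT hetzner
```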
Mar 14 00:08:11.888169 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:08:11.888176 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:08:11.888185 kernel: GICv3: 256 SPIs implemented
Mar 14 00:08:11.888192 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:08:11.888200 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:08:11.888207 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 14 00:08:11.888214 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 14 00:08:11.888221 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 14 00:08:11.888241 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:08:11.888249 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:08:11.888256 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 14 00:08:11.888264 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 14 00:08:11.888271 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:08:11.888281 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:08:11.888289 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 14 00:08:11.888296 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 14 00:08:11.888303 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 14 00:08:11.888311 kernel: Console: colour dummy device 80x25
Mar 14 00:08:11.888318 kernel: ACPI: Core revision 20230628
Mar 14 00:08:11.888326 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 14 00:08:11.888333 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:08:11.888341 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:08:11.888348 kernel: landlock: Up and running.
Mar 14 00:08:11.888357 kernel: SELinux: Initializing.
Mar 14 00:08:11.888364 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:08:11.888372 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:08:11.888379 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:08:11.888387 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:08:11.888394 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:08:11.888402 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:08:11.888410 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 14 00:08:11.888417 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 14 00:08:11.888426 kernel: Remapping and enabling EFI services.
Mar 14 00:08:11.888433 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:08:11.888441 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:08:11.888449 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 14 00:08:11.888456 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 14 00:08:11.888464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 14 00:08:11.888471 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 14 00:08:11.888479 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:08:11.888486 kernel: SMP: Total of 2 processors activated.
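With delay-loop calibration skipped, the kernel derives BogoMIPS from the 25.00 MHz architected timer logged earlier. A worked check of the "50.00 BogoMIPS (lpj=25000)" figures, assuming the usual loops_per_jiffy convention and a tick rate of HZ=1000 (HZ is not printed in the log):

```python
timer_hz = 25_000_000  # arch_timer frequency from the log (25.00 MHz)
HZ = 1000              # assumed scheduler tick rate; not shown in the log

lpj = timer_hz // HZ             # timer cycles per jiffy -> loops_per_jiffy
bogomips = lpj / (500_000 / HZ)  # the kernel's BogoMIPS reporting convention

print(lpj, bogomips)  # 25000 50.0
```

Both values match the log line, which supports the HZ=1000 assumption for this build.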
Mar 14 00:08:11.888494 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:08:11.888502 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 14 00:08:11.888510 kernel: CPU features: detected: Common not Private translations
Mar 14 00:08:11.888523 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:08:11.888542 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 14 00:08:11.888551 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 14 00:08:11.888559 kernel: CPU features: detected: LSE atomic instructions
Mar 14 00:08:11.888567 kernel: CPU features: detected: Privileged Access Never
Mar 14 00:08:11.888575 kernel: CPU features: detected: RAS Extension Support
Mar 14 00:08:11.888585 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 14 00:08:11.888593 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:08:11.888601 kernel: alternatives: applying system-wide alternatives
Mar 14 00:08:11.888609 kernel: devtmpfs: initialized
Mar 14 00:08:11.888617 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:08:11.888625 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:08:11.888633 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:08:11.888641 kernel: SMBIOS 3.0.0 present.
Mar 14 00:08:11.888650 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 14 00:08:11.888659 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:08:11.888666 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:08:11.888674 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:08:11.888682 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:08:11.888690 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:08:11.888698 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1
Mar 14 00:08:11.888706 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:08:11.888714 kernel: cpuidle: using governor menu
Mar 14 00:08:11.888724 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:08:11.888732 kernel: ASID allocator initialised with 32768 entries
Mar 14 00:08:11.888739 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:08:11.888747 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:08:11.888755 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 14 00:08:11.888763 kernel: Modules: 0 pages in range for non-PLT usage
Mar 14 00:08:11.888771 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:08:11.888779 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:08:11.888787 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:08:11.888796 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:08:11.888804 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:08:11.888812 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:08:11.888820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:08:11.888828 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:08:11.888836 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:08:11.888843 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:08:11.888851 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:08:11.888859 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:08:11.888868 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:08:11.888877 kernel: ACPI: Interpreter enabled
Mar 14 00:08:11.888884 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:08:11.888892 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:08:11.888900 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 14 00:08:11.888908 kernel: printk: console [ttyAMA0] enabled
Mar 14 00:08:11.888916 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:08:11.889060 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:08:11.889140 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:08:11.889210 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:08:11.889293 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 14 00:08:11.889363 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 14 00:08:11.889373 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 14 00:08:11.889381 kernel: PCI host bridge to bus 0000:00
Mar 14 00:08:11.889455 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 14 00:08:11.889518 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:08:11.890656 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 14 00:08:11.890730 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:08:11.890818 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 14 00:08:11.890907 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 14 00:08:11.890978 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 14 00:08:11.891049 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:08:11.891133 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.891204 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 14 00:08:11.891301 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.891375 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 14 00:08:11.891453 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.891523 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 14 00:08:11.891631 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.891715 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 14 00:08:11.891792 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.891863 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 14 00:08:11.891940 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.892009 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 14 00:08:11.892087 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.892164 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 14 00:08:11.892261 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.892333 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 14 00:08:11.892409 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 14 00:08:11.892478 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 14 00:08:11.894649 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 14 00:08:11.894742 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 14 00:08:11.894824 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:08:11.894896 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 14 00:08:11.894967 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:08:11.895036 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:08:11.895112 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 14 00:08:11.895191 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 14 00:08:11.895328 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 14 00:08:11.895408 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 14 00:08:11.895480 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 14 00:08:11.895575 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 14 00:08:11.895650 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 14 00:08:11.895740 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 14 00:08:11.895812 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 14 00:08:11.895882 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 14 00:08:11.895963 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 14 00:08:11.896037 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 14 00:08:11.896109 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:08:11.896196 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 14 00:08:11.896287 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 14 00:08:11.896363 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 14 00:08:11.896435 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 14 00:08:11.896509 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 14 00:08:11.899676 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:08:11.899767 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 14 00:08:11.899853 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 14 00:08:11.899924 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 14 00:08:11.900004 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 14 00:08:11.900079 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 14 00:08:11.900150 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:08:11.900219 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 14 00:08:11.900340 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 14 00:08:11.900415 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 14 00:08:11.900491 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 14 00:08:11.900589 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 14 00:08:11.900662 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:08:11.900732 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 14 00:08:11.900805 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 14 00:08:11.900874 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:08:11.900948 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 14 00:08:11.901026 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 14 00:08:11.901097 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:08:11.901165 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 14 00:08:11.901251 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 14 00:08:11.901325 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:08:11.901394 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 14 00:08:11.901476 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 14 00:08:11.903247 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:08:11.903355 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 14 00:08:11.903428 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 14 00:08:11.903497 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:08:11.903639 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 14 00:08:11.903712 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:08:11.903794 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 14 00:08:11.903870 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:08:11.903950 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 14 00:08:11.904019 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:08:11.904101 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 14 00:08:11.904170 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:08:11.904256 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 14 00:08:11.904328 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:08:11.904403 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 14 00:08:11.904472 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:08:11.904552 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 14 00:08:11.904623 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:08:11.904693 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 14 00:08:11.904762 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:08:11.904836 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 14 00:08:11.904909 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 14 00:08:11.904978 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 14 00:08:11.905047 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 14 00:08:11.905116 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 14 00:08:11.905184 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 14 00:08:11.905293 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 14 00:08:11.905367 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 14 00:08:11.905438 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 14 00:08:11.905511 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 14 00:08:11.907659 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 14 00:08:11.907745 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 14 00:08:11.907818 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 14 00:08:11.907887 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 14 00:08:11.907959 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 14 00:08:11.908028 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 14 00:08:11.908098 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 14 00:08:11.908172 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 14 00:08:11.908289 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 14 00:08:11.908368 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 14 00:08:11.908443 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 14 00:08:11.908521 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 14 00:08:11.908611 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 14 00:08:11.908684 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 14 00:08:11.908753 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 14 00:08:11.908827 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 14 00:08:11.908896 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 14 00:08:11.908964 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:08:11.909040 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 14 00:08:11.909114 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 14 00:08:11.909182 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 14 00:08:11.909265 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 14 00:08:11.909336 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:08:11.909412 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 14 00:08:11.909486 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 14 00:08:11.910095 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 14 00:08:11.910182 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 14 00:08:11.910283 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 14 00:08:11.910356 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:08:11.910436 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 14 00:08:11.910506 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 14 00:08:11.910603 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 14 00:08:11.910676 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 14 00:08:11.910746 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:08:11.910823 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 14 00:08:11.910900 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 14 00:08:11.910971 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 14 00:08:11.911041 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 14 00:08:11.911133 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 14 00:08:11.911205 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:08:11.911329 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 14 00:08:11.911407 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 14 00:08:11.911479 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 14 00:08:11.911651 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 14 00:08:11.911730 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 14 00:08:11.911801 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:08:11.911878 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 14 00:08:11.911949 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 14 00:08:11.912019 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 14 00:08:11.912087 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 14 00:08:11.912155 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 14 00:08:11.912242 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 14 00:08:11.912321 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:08:11.912392 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 14 00:08:11.912458 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 14 00:08:11.912527 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 14 00:08:11.912733 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:08:11.912806 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 14 00:08:11.912875 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 14 00:08:11.912950 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 14 00:08:11.913019 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:08:11.913088 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 14 00:08:11.913149 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:08:11.913211 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 14 00:08:11.913334 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 14 00:08:11.913404 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 14 00:08:11.913474 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 14 00:08:11.913604 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 14 00:08:11.913678 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 14 00:08:11.913741 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 14 00:08:11.913811 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 14 00:08:11.913873 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 14 00:08:11.913939 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 14 00:08:11.914009 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 14 00:08:11.914074 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 14 00:08:11.914151 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 14 00:08:11.914221 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 14 00:08:11.914304 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 14 00:08:11.914367 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 14 00:08:11.914441 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 14 00:08:11.914503 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 14 00:08:11.915304 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 14 00:08:11.915401 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 14 00:08:11.915475 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 14 00:08:11.915624 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 14 00:08:11.915718 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 14 00:08:11.915783 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 14 00:08:11.915845 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 14 00:08:11.915914 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 14 00:08:11.915978 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 14 00:08:11.916046 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 14 00:08:11.916056 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:08:11.916065 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:08:11.916073 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:08:11.916084 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:08:11.916092 kernel: iommu: Default domain type: Translated
Mar 14 00:08:11.916101 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:08:11.916109 kernel: efivars: Registered efivars operations
Mar 14 00:08:11.916117 kernel: vgaarb: loaded
Mar 14 00:08:11.916127 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:08:11.916135 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:08:11.916144 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:08:11.916152 kernel: pnp: PnP ACPI init
Mar 14 00:08:11.916237 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 14 00:08:11.916250 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:08:11.916259 kernel: NET: Registered PF_INET protocol family
Mar 14 00:08:11.916268 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:08:11.916279 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:08:11.916288 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:08:11.916296 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:08:11.916305
kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 14 00:08:11.916314 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 14 00:08:11.916322 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:08:11.916331 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:08:11.916339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:08:11.916425 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 14 00:08:11.916440 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:08:11.916449 kernel: kvm [1]: HYP mode not available Mar 14 00:08:11.916457 kernel: Initialise system trusted keyrings Mar 14 00:08:11.916465 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:08:11.916474 kernel: Key type asymmetric registered Mar 14 00:08:11.916482 kernel: Asymmetric key parser 'x509' registered Mar 14 00:08:11.916490 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 14 00:08:11.916499 kernel: io scheduler mq-deadline registered Mar 14 00:08:11.916507 kernel: io scheduler kyber registered Mar 14 00:08:11.916517 kernel: io scheduler bfq registered Mar 14 00:08:11.916526 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 14 00:08:11.916614 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 14 00:08:11.916686 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 14 00:08:11.916755 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.916826 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 14 00:08:11.916896 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Mar 14 00:08:11.916969 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.917041 kernel: pcieport 0000:00:02.2: 
PME: Signaling with IRQ 52 Mar 14 00:08:11.917111 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Mar 14 00:08:11.917180 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.917266 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 14 00:08:11.917338 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 14 00:08:11.917410 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.917482 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 14 00:08:11.918175 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 14 00:08:11.918321 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.918401 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 14 00:08:11.918471 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 14 00:08:11.918567 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.918644 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 14 00:08:11.918715 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 14 00:08:11.918784 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.918855 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 14 00:08:11.918924 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 14 00:08:11.918997 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 14 00:08:11.919009 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 
Mar 14 00:08:11.919078 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Mar 14 00:08:11.919147 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Mar 14 00:08:11.919217 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 14 00:08:11.919245 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:08:11.919258 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:08:11.919268 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:08:11.919350 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Mar 14 00:08:11.919429 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Mar 14 00:08:11.919441 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:08:11.919452 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:08:11.919524 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Mar 14 00:08:11.919563 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Mar 14 00:08:11.919572 kernel: thunder_xcv, ver 1.0
Mar 14 00:08:11.919584 kernel: thunder_bgx, ver 1.0
Mar 14 00:08:11.919592 kernel: nicpf, ver 1.0
Mar 14 00:08:11.919601 kernel: nicvf, ver 1.0
Mar 14 00:08:11.919688 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:08:11.919756 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:08:11 UTC (1773446891)
Mar 14 00:08:11.919767 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:08:11.919776 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 14 00:08:11.919785 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:08:11.919796 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:08:11.919804 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:08:11.919813 kernel: Segment Routing with IPv6
Mar 14 00:08:11.919821 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:08:11.919829 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:08:11.919838 kernel: Key type dns_resolver registered
Mar 14 00:08:11.919846 kernel: registered taskstats version 1
Mar 14 00:08:11.919854 kernel: Loading compiled-in X.509 certificates
Mar 14 00:08:11.919862 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:08:11.919872 kernel: Key type .fscrypt registered
Mar 14 00:08:11.919880 kernel: Key type fscrypt-provisioning registered
Mar 14 00:08:11.919889 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:08:11.919897 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:08:11.919905 kernel: ima: No architecture policies found
Mar 14 00:08:11.919914 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:08:11.919922 kernel: clk: Disabling unused clocks
Mar 14 00:08:11.919931 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:08:11.919939 kernel: Run /init as init process
Mar 14 00:08:11.919948 kernel: with arguments:
Mar 14 00:08:11.919957 kernel: /init
Mar 14 00:08:11.919965 kernel: with environment:
Mar 14 00:08:11.919973 kernel: HOME=/
Mar 14 00:08:11.919981 kernel: TERM=linux
Mar 14 00:08:11.919991 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:08:11.920003 systemd[1]: Detected virtualization kvm.
Mar 14 00:08:11.920012 systemd[1]: Detected architecture arm64.
Mar 14 00:08:11.920022 systemd[1]: Running in initrd.
Mar 14 00:08:11.920031 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:08:11.920039 systemd[1]: Hostname set to .
Mar 14 00:08:11.920049 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:08:11.920058 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:08:11.920066 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:08:11.920075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:08:11.920085 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:08:11.920095 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:08:11.920104 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:08:11.920114 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:08:11.920124 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:08:11.920133 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:08:11.920142 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:08:11.920150 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:08:11.920161 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:08:11.920170 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:08:11.920178 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:08:11.920187 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:08:11.920196 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:08:11.920205 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:08:11.920215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:08:11.920225 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:08:11.920247 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:08:11.920256 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:08:11.920265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:08:11.920274 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:08:11.920283 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:08:11.920292 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:08:11.920301 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:08:11.920309 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:08:11.920318 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:08:11.920329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:08:11.920338 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:08:11.920346 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:08:11.920355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:08:11.920365 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:08:11.920394 systemd-journald[237]: Collecting audit messages is disabled.
Mar 14 00:08:11.920418 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:08:11.920427 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:08:11.920438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:08:11.920448 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:08:11.920457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:08:11.920466 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:08:11.920474 kernel: Bridge firewalling registered
Mar 14 00:08:11.920483 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:08:11.920492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:08:11.920502 systemd-journald[237]: Journal started
Mar 14 00:08:11.920526 systemd-journald[237]: Runtime Journal (/run/log/journal/e6da5a8dcf3e4b72afc54e5dea639fab) is 8.0M, max 76.6M, 68.6M free.
Mar 14 00:08:11.879065 systemd-modules-load[238]: Inserted module 'overlay'
Mar 14 00:08:11.921721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:08:11.907444 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 14 00:08:11.925564 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:08:11.931878 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:08:11.945981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:08:11.951224 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:08:11.952904 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:08:11.960481 dracut-cmdline[266]: dracut-dracut-053
Mar 14 00:08:11.966161 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:08:11.977881 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:08:11.988098 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:08:12.010306 systemd-resolved[296]: Positive Trust Anchors:
Mar 14 00:08:12.011076 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:08:12.011112 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:08:12.021114 systemd-resolved[296]: Defaulting to hostname 'linux'.
Mar 14 00:08:12.022706 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:08:12.023962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:08:12.034578 kernel: SCSI subsystem initialized
Mar 14 00:08:12.039579 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:08:12.046575 kernel: iscsi: registered transport (tcp)
Mar 14 00:08:12.061567 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:08:12.061619 kernel: QLogic iSCSI HBA Driver
Mar 14 00:08:12.107731 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:08:12.112697 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:08:12.133809 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:08:12.133910 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:08:12.133953 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:08:12.200549 kernel: raid6: neonx8 gen() 15665 MB/s
Mar 14 00:08:12.201630 kernel: raid6: neonx4 gen() 15421 MB/s
Mar 14 00:08:12.218599 kernel: raid6: neonx2 gen() 13035 MB/s
Mar 14 00:08:12.235578 kernel: raid6: neonx1 gen() 10372 MB/s
Mar 14 00:08:12.252609 kernel: raid6: int64x8 gen() 6875 MB/s
Mar 14 00:08:12.269584 kernel: raid6: int64x4 gen() 7280 MB/s
Mar 14 00:08:12.286607 kernel: raid6: int64x2 gen() 6038 MB/s
Mar 14 00:08:12.303624 kernel: raid6: int64x1 gen() 5027 MB/s
Mar 14 00:08:12.303715 kernel: raid6: using algorithm neonx8 gen() 15665 MB/s
Mar 14 00:08:12.320591 kernel: raid6: .... xor() 11724 MB/s, rmw enabled
Mar 14 00:08:12.320641 kernel: raid6: using neon recovery algorithm
Mar 14 00:08:12.325566 kernel: xor: measuring software checksum speed
Mar 14 00:08:12.325634 kernel: 8regs : 19754 MB/sec
Mar 14 00:08:12.325659 kernel: 32regs : 17466 MB/sec
Mar 14 00:08:12.326652 kernel: arm64_neon : 27105 MB/sec
Mar 14 00:08:12.326686 kernel: xor: using function: arm64_neon (27105 MB/sec)
Mar 14 00:08:12.377577 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:08:12.392292 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:08:12.400800 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:08:12.413439 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Mar 14 00:08:12.416804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:08:12.426918 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:08:12.440379 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Mar 14 00:08:12.476674 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:08:12.483772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:08:12.534398 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:08:12.543378 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:08:12.563320 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:08:12.566770 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:08:12.568691 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:08:12.571292 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:08:12.578188 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:08:12.601419 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:08:12.659287 kernel: scsi host0: Virtio SCSI HBA
Mar 14 00:08:12.665108 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:08:12.665201 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 14 00:08:12.682605 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:08:12.682731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:08:12.685305 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:08:12.686030 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:08:12.690739 kernel: ACPI: bus type USB registered
Mar 14 00:08:12.690764 kernel: usbcore: registered new interface driver usbfs
Mar 14 00:08:12.690775 kernel: usbcore: registered new interface driver hub
Mar 14 00:08:12.686184 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:08:12.687734 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:08:12.693175 kernel: usbcore: registered new device driver usb
Mar 14 00:08:12.697859 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:08:12.715324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:08:12.718625 kernel: sr 0:0:0:0: Power-on or device reset occurred
Mar 14 00:08:12.718816 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Mar 14 00:08:12.719654 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:08:12.721950 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:08:12.728613 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:08:12.728804 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 14 00:08:12.727828 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:08:12.730564 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 14 00:08:12.732021 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 14 00:08:12.732174 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 14 00:08:12.733380 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 14 00:08:12.734558 kernel: hub 1-0:1.0: USB hub found
Mar 14 00:08:12.734730 kernel: hub 1-0:1.0: 4 ports detected
Mar 14 00:08:12.736557 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 14 00:08:12.738716 kernel: hub 2-0:1.0: USB hub found
Mar 14 00:08:12.738876 kernel: hub 2-0:1.0: 4 ports detected
Mar 14 00:08:12.741564 kernel: sd 0:0:0:1: Power-on or device reset occurred
Mar 14 00:08:12.741746 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 14 00:08:12.741860 kernel: sd 0:0:0:1: [sda] Write Protect is off
Mar 14 00:08:12.742937 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Mar 14 00:08:12.743073 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 14 00:08:12.749772 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:08:12.749814 kernel: GPT:17805311 != 80003071
Mar 14 00:08:12.749827 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:08:12.749840 kernel: GPT:17805311 != 80003071
Mar 14 00:08:12.749859 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:08:12.750606 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:08:12.751597 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Mar 14 00:08:12.756858 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:08:12.782566 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (504)
Mar 14 00:08:12.790251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 14 00:08:12.803548 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (526)
Mar 14 00:08:12.805721 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:08:12.812248 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 14 00:08:12.822250 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 14 00:08:12.824421 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 14 00:08:12.831736 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:08:12.838943 disk-uuid[574]: Primary Header is updated.
Mar 14 00:08:12.838943 disk-uuid[574]: Secondary Entries is updated.
Mar 14 00:08:12.838943 disk-uuid[574]: Secondary Header is updated.
Mar 14 00:08:12.844558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:08:12.972571 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 14 00:08:13.108584 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Mar 14 00:08:13.108681 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 14 00:08:13.109916 kernel: usbcore: registered new interface driver usbhid
Mar 14 00:08:13.109972 kernel: usbhid: USB HID core driver
Mar 14 00:08:13.218602 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Mar 14 00:08:13.348598 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Mar 14 00:08:13.402583 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Mar 14 00:08:13.861572 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 14 00:08:13.862788 disk-uuid[575]: The operation has completed successfully.
Mar 14 00:08:13.911247 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:08:13.911384 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:08:13.929726 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:08:13.945340 sh[592]: Success
Mar 14 00:08:13.960607 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:08:14.008956 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:08:14.018710 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:08:14.022562 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:08:14.036349 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:08:14.036420 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:08:14.036440 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:08:14.037824 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:08:14.037850 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:08:14.044565 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:08:14.046832 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:08:14.048790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:08:14.054766 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:08:14.058761 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:08:14.073664 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:08:14.073717 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:08:14.074552 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:08:14.078621 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:08:14.078707 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:08:14.089591 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:08:14.089518 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:08:14.097369 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:08:14.105920 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:08:14.190571 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:08:14.200744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:08:14.203609 ignition[686]: Ignition 2.19.0
Mar 14 00:08:14.203621 ignition[686]: Stage: fetch-offline
Mar 14 00:08:14.203662 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:08:14.203671 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:08:14.206963 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:08:14.203834 ignition[686]: parsed url from cmdline: ""
Mar 14 00:08:14.203837 ignition[686]: no config URL provided
Mar 14 00:08:14.203842 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:08:14.203849 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:08:14.203854 ignition[686]: failed to fetch config: resource requires networking
Mar 14 00:08:14.204032 ignition[686]: Ignition finished successfully
Mar 14 00:08:14.224786 systemd-networkd[779]: lo: Link UP
Mar 14 00:08:14.224796 systemd-networkd[779]: lo: Gained carrier
Mar 14 00:08:14.226456 systemd-networkd[779]: Enumeration completed
Mar 14 00:08:14.226682 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:08:14.227696 systemd[1]: Reached target network.target - Network.
Mar 14 00:08:14.228498 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:14.228501 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:08:14.229314 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:14.229317 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:08:14.229837 systemd-networkd[779]: eth0: Link UP
Mar 14 00:08:14.229840 systemd-networkd[779]: eth0: Gained carrier
Mar 14 00:08:14.229850 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:14.235969 systemd-networkd[779]: eth1: Link UP
Mar 14 00:08:14.235972 systemd-networkd[779]: eth1: Gained carrier
Mar 14 00:08:14.235984 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:14.236983 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:08:14.249121 ignition[782]: Ignition 2.19.0
Mar 14 00:08:14.249131 ignition[782]: Stage: fetch
Mar 14 00:08:14.249340 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:08:14.249351 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:08:14.249446 ignition[782]: parsed url from cmdline: ""
Mar 14 00:08:14.249450 ignition[782]: no config URL provided
Mar 14 00:08:14.249454 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:08:14.249462 ignition[782]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:08:14.249481 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 14 00:08:14.250339 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 14 00:08:14.279639 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:08:14.301651 systemd-networkd[779]: eth0: DHCPv4 address 46.224.38.228/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:08:14.451020 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 14 00:08:14.459230 ignition[782]: GET result: OK
Mar 14 00:08:14.459376 ignition[782]: parsing config with SHA512: 3604522dd38a026d519d688e1ba4c668e5ef4f7a5d5f5577e5fe01db22a75e8ffc2e030a399bc76451f7bff573005465b56b0a5144cbee9519a69e2a3f735fe5
Mar 14 00:08:14.464494 unknown[782]: fetched base config from "system"
Mar 14 00:08:14.464503 unknown[782]: fetched base config from "system"
Mar 14 00:08:14.464871 ignition[782]: fetch: fetch complete
Mar 14 00:08:14.464510 unknown[782]: fetched user config from "hetzner"
Mar 14 00:08:14.464876 ignition[782]: fetch: fetch passed
Mar 14 00:08:14.466671 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:08:14.464916 ignition[782]: Ignition finished successfully
Mar 14 00:08:14.473748 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:08:14.486827 ignition[790]: Ignition 2.19.0
Mar 14 00:08:14.486841 ignition[790]: Stage: kargs
Mar 14 00:08:14.487022 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:08:14.487032 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:08:14.487953 ignition[790]: kargs: kargs passed
Mar 14 00:08:14.489829 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:08:14.487999 ignition[790]: Ignition finished successfully
Mar 14 00:08:14.497835 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:08:14.512984 ignition[797]: Ignition 2.19.0
Mar 14 00:08:14.512996 ignition[797]: Stage: disks
Mar 14 00:08:14.513178 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:08:14.513187 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:08:14.514291 ignition[797]: disks: disks passed
Mar 14 00:08:14.514344 ignition[797]: Ignition finished successfully
Mar 14 00:08:14.519674 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:08:14.521997 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:08:14.523691 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:08:14.525051 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:08:14.526248 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:08:14.527313 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:08:14.532713 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:08:14.551370 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 14 00:08:14.555582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:08:14.563639 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:08:14.615588 kernel: EXT4-fs (sda9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:08:14.615332 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:08:14.616974 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:08:14.631722 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:08:14.635679 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:08:14.644097 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 14 00:08:14.645601 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:08:14.646747 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:08:14.650419 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:08:14.653627 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814)
Mar 14 00:08:14.662146 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:08:14.662189 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:08:14.663568 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:08:14.662733 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:08:14.671752 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:08:14.671819 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:08:14.681182 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:08:14.712383 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:08:14.715061 coreos-metadata[816]: Mar 14 00:08:14.714 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 14 00:08:14.717909 coreos-metadata[816]: Mar 14 00:08:14.716 INFO Fetch successful
Mar 14 00:08:14.717909 coreos-metadata[816]: Mar 14 00:08:14.716 INFO wrote hostname ci-4081-3-6-n-0ed13f424d to /sysroot/etc/hostname
Mar 14 00:08:14.719664 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:08:14.723038 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:08:14.728606 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:08:14.734675 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:08:14.832651 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:08:14.837727 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:08:14.842272 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:08:14.850589 kernel: BTRFS info (device sda6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:08:14.874414 ignition[932]: INFO : Ignition 2.19.0
Mar 14 00:08:14.874414 ignition[932]: INFO : Stage: mount
Mar 14 00:08:14.874414 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:08:14.874414 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:08:14.874414 ignition[932]: INFO : mount: mount passed
Mar 14 00:08:14.874414 ignition[932]: INFO : Ignition finished successfully
Mar 14 00:08:14.875518 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:08:14.877312 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:08:14.885699 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:08:15.035735 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:08:15.042904 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:08:15.053572 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Mar 14 00:08:15.055235 kernel: BTRFS info (device sda6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:08:15.055273 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:08:15.055285 kernel: BTRFS info (device sda6): using free space tree
Mar 14 00:08:15.058548 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 14 00:08:15.058605 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 14 00:08:15.060916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:08:15.087872 ignition[960]: INFO : Ignition 2.19.0
Mar 14 00:08:15.088641 ignition[960]: INFO : Stage: files
Mar 14 00:08:15.089019 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:08:15.089019 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:08:15.090370 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:08:15.092266 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:08:15.092266 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:08:15.097019 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:08:15.098422 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:08:15.100054 unknown[960]: wrote ssh authorized keys file for user: core
Mar 14 00:08:15.101471 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:08:15.103131 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:08:15.104377 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:08:15.188297 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:08:15.315873 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:08:15.317485 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:08:15.328730 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:08:15.328730 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:08:15.328730 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 14 00:08:15.448555 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:08:15.592745 systemd-networkd[779]: eth0: Gained IPv6LL
Mar 14 00:08:15.676311 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:08:15.677841 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:08:15.680284 ignition[960]: INFO : files: files passed
Mar 14 00:08:15.680284 ignition[960]: INFO : Ignition finished successfully
Mar 14 00:08:15.685629 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:08:15.693984 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:08:15.695721 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:08:15.699759 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:08:15.700483 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:08:15.714880 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:08:15.714880 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:08:15.717704 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:08:15.721228 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:08:15.722252 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:08:15.726750 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:08:15.766796 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:08:15.767019 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:08:15.769778 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:08:15.771144 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:08:15.772737 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:08:15.786755 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:08:15.803997 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:08:15.819843 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:08:15.833470 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:08:15.835096 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:08:15.835897 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:08:15.838406 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:08:15.838560 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:08:15.840395 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:08:15.841221 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:08:15.842404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:08:15.843589 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:08:15.844680 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:08:15.845776 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:08:15.846980 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:08:15.848302 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:08:15.848769 systemd-networkd[779]: eth1: Gained IPv6LL
Mar 14 00:08:15.850476 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:08:15.851522 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:08:15.852329 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:08:15.852450 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:08:15.853775 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:08:15.854432 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:08:15.855531 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:08:15.855626 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:08:15.856792 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:08:15.856906 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:08:15.858470 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:08:15.858601 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:08:15.859956 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:08:15.860049 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:08:15.861247 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 14 00:08:15.861348 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 14 00:08:15.867733 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:08:15.871838 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:08:15.872334 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:08:15.872452 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:08:15.876276 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:08:15.876395 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:08:15.887612 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:08:15.888339 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:08:15.894669 ignition[1012]: INFO : Ignition 2.19.0
Mar 14 00:08:15.894669 ignition[1012]: INFO : Stage: umount
Mar 14 00:08:15.894669 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:08:15.894669 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 14 00:08:15.894669 ignition[1012]: INFO : umount: umount passed
Mar 14 00:08:15.894669 ignition[1012]: INFO : Ignition finished successfully
Mar 14 00:08:15.893663 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:08:15.893782 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:08:15.895966 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:08:15.896067 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:08:15.898101 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:08:15.898153 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:08:15.900368 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:08:15.900425 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:08:15.903878 systemd[1]: Stopped target network.target - Network.
Mar 14 00:08:15.905617 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:08:15.905697 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:08:15.907361 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:08:15.909025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:08:15.913161 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:08:15.918323 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:08:15.920220 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:08:15.921137 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:08:15.921184 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:08:15.922184 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:08:15.922234 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:08:15.924242 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:08:15.924312 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:08:15.925215 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:08:15.925269 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:08:15.926589 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:08:15.928747 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:08:15.934615 systemd-networkd[779]: eth0: DHCPv6 lease lost
Mar 14 00:08:15.935163 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:08:15.939614 systemd-networkd[779]: eth1: DHCPv6 lease lost
Mar 14 00:08:15.940561 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:08:15.940659 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:08:15.942057 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:08:15.942146 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:08:15.945573 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:08:15.945683 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:08:15.947770 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:08:15.947905 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:08:15.951018 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:08:15.951061 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:08:15.956713 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:08:15.957198 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:08:15.957266 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:08:15.958355 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:08:15.958400 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:08:15.960292 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:08:15.960345 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:08:15.962648 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:08:15.962701 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:08:15.963714 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:08:15.977425 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:08:15.977591 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:08:15.980726 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:08:15.980905 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:08:15.982288 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:08:15.982333 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:08:15.983263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:08:15.983295 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:08:15.984229 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:08:15.984277 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:08:15.986120 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:08:15.986163 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:08:15.987599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:08:15.987647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:08:15.998905 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:08:16.000649 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:08:16.000720 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:08:16.001565 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 14 00:08:16.001615 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:08:16.002487 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:08:16.002559 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:08:16.004522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:08:16.004586 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:08:16.013252 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:08:16.013445 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:08:16.015711 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:08:16.020740 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:08:16.031888 systemd[1]: Switching root.
Mar 14 00:08:16.062555 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:08:16.062612 systemd-journald[237]: Journal stopped
Mar 14 00:08:16.932405 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:08:16.932468 kernel: SELinux: policy capability open_perms=1
Mar 14 00:08:16.932480 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:08:16.932493 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:08:16.932502 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:08:16.932512 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:08:16.932521 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:08:16.934248 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:08:16.934286 kernel: audit: type=1403 audit(1773446896.182:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:08:16.934305 systemd[1]: Successfully loaded SELinux policy in 35.097ms.
Mar 14 00:08:16.934329 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.565ms.
Mar 14 00:08:16.934343 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:08:16.934354 systemd[1]: Detected virtualization kvm.
Mar 14 00:08:16.934365 systemd[1]: Detected architecture arm64.
Mar 14 00:08:16.934375 systemd[1]: Detected first boot.
Mar 14 00:08:16.934386 systemd[1]: Hostname set to .
Mar 14 00:08:16.934396 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:08:16.934406 zram_generator::config[1054]: No configuration found.
Mar 14 00:08:16.934420 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:08:16.934432 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:08:16.934442 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:08:16.934452 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:08:16.934463 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:08:16.934475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:08:16.934485 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:08:16.934495 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:08:16.934506 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:08:16.934516 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:08:16.934528 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:08:16.936579 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:08:16.936683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:08:16.936698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:08:16.936709 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:08:16.936720 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:08:16.936731 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:08:16.936742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:08:16.936753 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 14 00:08:16.936770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:08:16.936781 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:08:16.936796 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:08:16.936811 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:08:16.936822 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:08:16.936832 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:08:16.936848 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:08:16.936858 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:08:16.936869 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:08:16.936900 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:08:16.936912 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:08:16.936923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:08:16.936933 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:08:16.936945 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:08:16.936956 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:08:16.936970 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:08:16.936981 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:08:16.936991 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:08:16.937002 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:08:16.937013 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:08:16.937023 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:08:16.937034 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:08:16.937045 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:08:16.937056 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:08:16.937105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:08:16.937117 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:08:16.937129 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:08:16.937142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:08:16.937154 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:08:16.937166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:08:16.937179 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:08:16.937189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:08:16.937214 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:08:16.937226 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:08:16.937237 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:08:16.937247 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:08:16.937257 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:08:16.937270 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:08:16.937281 kernel: loop: module loaded
Mar 14 00:08:16.937292 kernel: fuse: init (API version 7.39)
Mar 14 00:08:16.937302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:08:16.937314 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:08:16.937324 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:08:16.937335 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:08:16.937347 kernel: ACPI: bus type drm_connector registered
Mar 14 00:08:16.937357 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:08:16.937369 systemd[1]: Stopped verity-setup.service.
Mar 14 00:08:16.937380 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:08:16.937390 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:08:16.937401 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:08:16.937413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:08:16.937425 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:08:16.937435 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:08:16.937446 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:08:16.937456 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:08:16.937467 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:08:16.937477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:08:16.937506 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:08:16.937520 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:08:16.937531 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:08:16.937578 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:08:16.937589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:08:16.937600 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:08:16.937610 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:08:16.937621 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:08:16.937632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:08:16.937645 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:08:16.937671 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:08:16.937712 systemd-journald[1117]: Collecting audit messages is disabled.
Mar 14 00:08:16.937737 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:08:16.937749 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:08:16.937760 systemd-journald[1117]: Journal started
Mar 14 00:08:16.937784 systemd-journald[1117]: Runtime Journal (/run/log/journal/e6da5a8dcf3e4b72afc54e5dea639fab) is 8.0M, max 76.6M, 68.6M free.
Mar 14 00:08:16.943117 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:08:16.678073 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:08:16.696389 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 14 00:08:16.696822 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:08:16.951617 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:08:16.951652 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:08:16.951671 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:08:16.956564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:08:16.962550 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:08:16.972582 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:08:16.972638 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:08:16.979854 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:08:16.983550 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:08:16.993297 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:08:16.993360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:08:16.995934 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:08:16.999161 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:08:17.009885 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:08:17.009952 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:08:17.008702 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:08:17.012066 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:08:17.014865 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:08:17.016157 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:08:17.022567 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:08:17.023485 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:08:17.042553 kernel: loop0: detected capacity change from 0 to 114328
Mar 14 00:08:17.049123 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:08:17.059576 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:08:17.068673 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:08:17.070734 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:08:17.072445 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:08:17.073554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:08:17.087929 systemd-journald[1117]: Time spent on flushing to /var/log/journal/e6da5a8dcf3e4b72afc54e5dea639fab is 50.088ms for 1132 entries.
Mar 14 00:08:17.087929 systemd-journald[1117]: System Journal (/var/log/journal/e6da5a8dcf3e4b72afc54e5dea639fab) is 8.0M, max 584.8M, 576.8M free.
Mar 14 00:08:17.150648 systemd-journald[1117]: Received client request to flush runtime journal.
Mar 14 00:08:17.150725 kernel: loop1: detected capacity change from 0 to 8
Mar 14 00:08:17.150741 kernel: loop2: detected capacity change from 0 to 197488
Mar 14 00:08:17.103299 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Mar 14 00:08:17.103311 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Mar 14 00:08:17.116716 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:08:17.119895 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:08:17.126750 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:08:17.135844 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:08:17.136863 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:08:17.153925 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:08:17.188558 kernel: loop3: detected capacity change from 0 to 114432
Mar 14 00:08:17.204575 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:08:17.212711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:08:17.238553 kernel: loop4: detected capacity change from 0 to 114328
Mar 14 00:08:17.248658 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 14 00:08:17.248676 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
Mar 14 00:08:17.252882 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:08:17.263567 kernel: loop5: detected capacity change from 0 to 8
Mar 14 00:08:17.265565 kernel: loop6: detected capacity change from 0 to 197488
Mar 14 00:08:17.286564 kernel: loop7: detected capacity change from 0 to 114432
Mar 14 00:08:17.301207 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 14 00:08:17.301679 (sd-merge)[1196]: Merged extensions into '/usr'.
Mar 14 00:08:17.309810 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:08:17.309834 systemd[1]: Reloading...
Mar 14 00:08:17.409955 zram_generator::config[1222]: No configuration found.
Mar 14 00:08:17.545649 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:08:17.557882 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:08:17.604783 systemd[1]: Reloading finished in 294 ms.
Mar 14 00:08:17.630762 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:08:17.633961 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:08:17.641980 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:08:17.647757 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:08:17.649611 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:08:17.661724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:08:17.664743 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:08:17.664764 systemd[1]: Reloading...
Mar 14 00:08:17.666416 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:08:17.666689 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:08:17.667406 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:08:17.668024 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Mar 14 00:08:17.668132 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Mar 14 00:08:17.671269 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:08:17.671389 systemd-tmpfiles[1261]: Skipping /boot
Mar 14 00:08:17.682136 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:08:17.682304 systemd-tmpfiles[1261]: Skipping /boot
Mar 14 00:08:17.699927 systemd-udevd[1263]: Using default interface naming scheme 'v255'.
Mar 14 00:08:17.775628 zram_generator::config[1301]: No configuration found.
Mar 14 00:08:17.889559 kernel: mousedev: PS/2 mouse device common for all mice
Mar 14 00:08:17.929147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:08:17.948563 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1311)
Mar 14 00:08:17.999639 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 14 00:08:18.000248 systemd[1]: Reloading finished in 335 ms.
Mar 14 00:08:18.025812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:08:18.032633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:08:18.073350 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Mar 14 00:08:18.088897 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:08:18.093846 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:08:18.095741 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:08:18.100921 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:08:18.104645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:08:18.111841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:08:18.115220 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 14 00:08:18.115274 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 14 00:08:18.115314 kernel: [drm] features: -context_init
Mar 14 00:08:18.114741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:08:18.118582 kernel: [drm] number of scanouts: 1
Mar 14 00:08:18.118638 kernel: [drm] number of cap sets: 0
Mar 14 00:08:18.118858 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:08:18.120554 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 14 00:08:18.124815 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:08:18.127933 kernel: Console: switching to colour frame buffer device 160x50
Mar 14 00:08:18.133693 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:08:18.141560 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 14 00:08:18.144393 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:08:18.147759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:08:18.148699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:08:18.174345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:08:18.174593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:08:18.176708 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:08:18.176974 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:08:18.186209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 14 00:08:18.195709 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:08:18.202978 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:08:18.206856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:08:18.212842 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:08:18.217858 augenrules[1399]: No rules
Mar 14 00:08:18.216816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:08:18.218610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:08:18.220168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:08:18.221511 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:08:18.232778 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:08:18.237773 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:08:18.243742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:08:18.248609 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:08:18.254880 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:08:18.255902 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:08:18.256036 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:08:18.257073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:08:18.257687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:08:18.264985 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:08:18.266930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:08:18.273838 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:08:18.275668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:08:18.276036 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:08:18.295356 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:08:18.296631 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:08:18.299092 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:08:18.305796 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:08:18.342552 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:08:18.364620 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:08:18.365516 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:08:18.377757 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:08:18.392548 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:08:18.395084 systemd-networkd[1376]: lo: Link UP
Mar 14 00:08:18.395141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:08:18.396895 systemd-networkd[1376]: lo: Gained carrier
Mar 14 00:08:18.400118 systemd-networkd[1376]: Enumeration completed
Mar 14 00:08:18.401087 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:08:18.402363 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:18.402626 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:08:18.406826 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:18.408580 systemd-networkd[1376]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:08:18.409172 systemd-networkd[1376]: eth0: Link UP
Mar 14 00:08:18.409176 systemd-networkd[1376]: eth0: Gained carrier
Mar 14 00:08:18.409227 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:18.414870 systemd-resolved[1378]: Positive Trust Anchors:
Mar 14 00:08:18.415035 systemd-resolved[1378]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:08:18.415067 systemd-resolved[1378]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:08:18.416928 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:08:18.419421 systemd-networkd[1376]: eth1: Link UP
Mar 14 00:08:18.419425 systemd-networkd[1376]: eth1: Gained carrier
Mar 14 00:08:18.419445 systemd-networkd[1376]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:08:18.420964 systemd-resolved[1378]: Using system hostname 'ci-4081-3-6-n-0ed13f424d'.
Mar 14 00:08:18.422930 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:08:18.423671 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:08:18.424407 systemd[1]: Reached target network.target - Network.
Mar 14 00:08:18.425919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:08:18.426683 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:08:18.427864 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:08:18.428675 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:08:18.429379 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:08:18.431164 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:08:18.431238 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:08:18.431801 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:08:18.432565 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:08:18.433262 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:08:18.434048 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:08:18.435799 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:08:18.439154 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:08:18.448367 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:08:18.450477 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:08:18.451898 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:08:18.453853 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:08:18.454467 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:08:18.455139 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:08:18.455174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:08:18.456492 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:08:18.456697 systemd-networkd[1376]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 14 00:08:18.459007 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
Mar 14 00:08:18.459666 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:08:18.463968 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:08:18.467830 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:08:18.469867 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:08:18.470431 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:08:18.474494 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:08:18.476745 systemd-networkd[1376]: eth0: DHCPv4 address 46.224.38.228/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 14 00:08:18.476798 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:08:18.480099 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 14 00:08:18.483511 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:08:18.488143 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:08:18.492647 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:08:18.494074 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:08:18.495771 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:08:18.499742 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:08:18.505706 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:08:18.520182 extend-filesystems[1447]: Found loop4
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found loop5
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found loop6
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found loop7
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda1
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda2
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda3
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found usr
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda4
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda6
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda7
Mar 14 00:08:18.526915 extend-filesystems[1447]: Found sda9
Mar 14 00:08:18.526915 extend-filesystems[1447]: Checking size of /dev/sda9
Mar 14 00:08:18.544316 systemd-timesyncd[1408]: Contacted time server 51.75.67.47:123 (1.flatcar.pool.ntp.org).
Mar 14 00:08:18.544420 systemd-timesyncd[1408]: Initial clock synchronization to Sat 2026-03-14 00:08:18.921133 UTC.
Mar 14 00:08:18.545407 dbus-daemon[1445]: [system] SELinux support is enabled
Mar 14 00:08:18.560024 jq[1446]: false
Mar 14 00:08:18.560017 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:08:18.560239 jq[1455]: true
Mar 14 00:08:18.564420 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:08:18.565351 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:08:18.573280 extend-filesystems[1447]: Resized partition /dev/sda9
Mar 14 00:08:18.574459 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:08:18.574792 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:08:18.575702 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:08:18.575870 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:08:18.582872 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:08:18.588628 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:08:18.588680 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:08:18.598555 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 14 00:08:18.594504 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:08:18.594553 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:08:18.601288 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:08:18.612173 tar[1458]: linux-arm64/LICENSE
Mar 14 00:08:18.612173 tar[1458]: linux-arm64/helm
Mar 14 00:08:18.636276 jq[1476]: true
Mar 14 00:08:18.672262 coreos-metadata[1443]: Mar 14 00:08:18.671 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 14 00:08:18.675486 update_engine[1454]: I20260314 00:08:18.673584 1454 main.cc:92] Flatcar Update Engine starting
Mar 14 00:08:18.679025 update_engine[1454]: I20260314 00:08:18.678977 1454 update_check_scheduler.cc:74] Next update check in 6m54s
Mar 14 00:08:18.679592 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:08:18.680888 coreos-metadata[1443]: Mar 14 00:08:18.680 INFO Fetch successful
Mar 14 00:08:18.680888 coreos-metadata[1443]: Mar 14 00:08:18.680 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 14 00:08:18.683213 coreos-metadata[1443]: Mar 14 00:08:18.683 INFO Fetch successful
Mar 14 00:08:18.694935 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:08:18.763565 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Mar 14 00:08:18.789743 extend-filesystems[1478]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 14 00:08:18.789743 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 5
Mar 14 00:08:18.789743 extend-filesystems[1478]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Mar 14 00:08:18.796692 extend-filesystems[1447]: Resized filesystem in /dev/sda9
Mar 14 00:08:18.796692 extend-filesystems[1447]: Found sr0
Mar 14 00:08:18.808620 bash[1512]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:08:18.838007 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1290)
Mar 14 00:08:18.855524 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:08:18.855725 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:08:18.860576 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:08:18.868854 systemd-logind[1453]: New seat seat0.
Mar 14 00:08:18.873162 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 14 00:08:18.873219 systemd-logind[1453]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Mar 14 00:08:18.873398 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:08:18.891915 systemd[1]: Starting sshkeys.service...
Mar 14 00:08:18.951594 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:08:18.957865 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:08:18.960565 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:08:18.963840 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:08:18.981798 containerd[1479]: time="2026-03-14T00:08:18.980802080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:08:18.982968 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:08:19.036527 coreos-metadata[1527]: Mar 14 00:08:19.036 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 14 00:08:19.037385 coreos-metadata[1527]: Mar 14 00:08:19.036 INFO Fetch successful Mar 14 00:08:19.039145 unknown[1527]: wrote ssh authorized keys file for user: core Mar 14 00:08:19.045453 containerd[1479]: time="2026-03-14T00:08:19.044675085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:08:19.050273 containerd[1479]: time="2026-03-14T00:08:19.050230037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:08:19.050367 containerd[1479]: time="2026-03-14T00:08:19.050353136Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:08:19.050464 containerd[1479]: time="2026-03-14T00:08:19.050412947Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.050711040Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.050738475Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.050807836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.050821909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.050997280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.051013322Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.051027060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.051036526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.051108065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.051301237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052360 containerd[1479]: time="2026-03-14T00:08:19.051397613Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:08:19.052948 containerd[1479]: time="2026-03-14T00:08:19.051411351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:08:19.052948 containerd[1479]: time="2026-03-14T00:08:19.051499644Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:08:19.052948 containerd[1479]: time="2026-03-14T00:08:19.051548231Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:08:19.060285 containerd[1479]: time="2026-03-14T00:08:19.060249725Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:08:19.060523 containerd[1479]: time="2026-03-14T00:08:19.060504258Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:08:19.060793 containerd[1479]: time="2026-03-14T00:08:19.060773618Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:08:19.061306 containerd[1479]: time="2026-03-14T00:08:19.061286873Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:08:19.061374 containerd[1479]: time="2026-03-14T00:08:19.061361888Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Mar 14 00:08:19.061573 containerd[1479]: time="2026-03-14T00:08:19.061554013Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:08:19.062829 containerd[1479]: time="2026-03-14T00:08:19.062806238Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:08:19.063374 containerd[1479]: time="2026-03-14T00:08:19.063039536Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:08:19.063374 containerd[1479]: time="2026-03-14T00:08:19.063065630Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:08:19.063374 containerd[1479]: time="2026-03-14T00:08:19.063079745Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:08:19.063374 containerd[1479]: time="2026-03-14T00:08:19.063102112Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:08:19.063374 containerd[1479]: time="2026-03-14T00:08:19.063116771Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064235762Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064267761Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064287322Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064302274Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064316264Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064328201Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064351573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064365813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064384829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064399405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064412808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064426253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064439111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.064668 containerd[1479]: time="2026-03-14T00:08:19.064452515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064466588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064482211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064497289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064516933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064530923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064547551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064572640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064780639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064796220Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064929832Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064948680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:08:19.065452 containerd[1479]: time="2026-03-14T00:08:19.064962041Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:08:19.066054 containerd[1479]: time="2026-03-14T00:08:19.064974858Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:08:19.066054 containerd[1479]: time="2026-03-14T00:08:19.065818582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:08:19.066268 containerd[1479]: time="2026-03-14T00:08:19.066153283Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:08:19.066268 containerd[1479]: time="2026-03-14T00:08:19.066175817Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:08:19.066268 containerd[1479]: time="2026-03-14T00:08:19.066191272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 14 00:08:19.067993 containerd[1479]: time="2026-03-14T00:08:19.067497403Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:08:19.068546 containerd[1479]: time="2026-03-14T00:08:19.067571371Z" level=info msg="Connect containerd service" Mar 14 00:08:19.068546 containerd[1479]: time="2026-03-14T00:08:19.068182929Z" level=info msg="using legacy CRI server" Mar 14 00:08:19.068546 containerd[1479]: time="2026-03-14T00:08:19.068409693Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:08:19.068920 containerd[1479]: time="2026-03-14T00:08:19.068894047Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:08:19.071282 containerd[1479]: time="2026-03-14T00:08:19.071216174Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:08:19.072049 containerd[1479]: time="2026-03-14T00:08:19.071787020Z" level=info msg="Start subscribing containerd event" Mar 14 00:08:19.072049 containerd[1479]: time="2026-03-14T00:08:19.071841218Z" level=info msg="Start recovering state" Mar 14 00:08:19.073563 containerd[1479]: time="2026-03-14T00:08:19.073261192Z" level=info msg="Start event monitor" Mar 14 00:08:19.073563 containerd[1479]: time="2026-03-14T00:08:19.073289255Z" level=info msg="Start 
snapshots syncer" Mar 14 00:08:19.073563 containerd[1479]: time="2026-03-14T00:08:19.073300898Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:08:19.073563 containerd[1479]: time="2026-03-14T00:08:19.073309024Z" level=info msg="Start streaming server" Mar 14 00:08:19.075610 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:08:19.074061 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:08:19.075991 containerd[1479]: time="2026-03-14T00:08:19.073830404Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:08:19.075991 containerd[1479]: time="2026-03-14T00:08:19.073894572Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:08:19.075991 containerd[1479]: time="2026-03-14T00:08:19.073954173Z" level=info msg="containerd successfully booted in 0.096365s" Mar 14 00:08:19.078954 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:08:19.084918 systemd[1]: Finished sshkeys.service. Mar 14 00:08:19.384054 tar[1458]: linux-arm64/README.md Mar 14 00:08:19.396658 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:08:19.459968 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:08:19.483778 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:08:19.493032 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:08:19.502488 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:08:19.502774 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:08:19.510730 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:08:19.534800 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:08:19.544190 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 14 00:08:19.553133 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 14 00:08:19.555316 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:08:19.754936 systemd-networkd[1376]: eth0: Gained IPv6LL Mar 14 00:08:19.759616 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:08:19.760954 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:08:19.775102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:08:19.779614 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:08:19.820219 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:08:20.073207 systemd-networkd[1376]: eth1: Gained IPv6LL Mar 14 00:08:20.571887 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:08:20.573153 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:08:20.577998 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:08:20.578669 systemd[1]: Startup finished in 762ms (kernel) + 4.496s (initrd) + 4.430s (userspace) = 9.689s. Mar 14 00:08:21.033257 kubelet[1573]: E0314 00:08:21.033090 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:08:21.038049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:08:21.038223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:08:31.289525 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 14 00:08:31.299902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:08:31.415490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:08:31.428208 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:08:31.488628 kubelet[1592]: E0314 00:08:31.488527 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:08:31.493231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:08:31.493567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:08:41.724956 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:08:41.731987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:08:41.866602 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:08:41.878074 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:08:41.921315 kubelet[1607]: E0314 00:08:41.921269 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:08:41.924098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:08:41.924267 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:08:51.975262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:08:51.984864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:08:52.096701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:08:52.101725 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:08:52.143467 kubelet[1623]: E0314 00:08:52.143376 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:08:52.147433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:08:52.147729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:09:02.225080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 14 00:09:02.233003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:09:02.363839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:09:02.380213 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:09:02.428109 kubelet[1638]: E0314 00:09:02.428038 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:09:02.431371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:09:02.431704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:09:02.826528 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:09:02.838110 systemd[1]: Started sshd@0-46.224.38.228:22-68.220.241.50:34138.service - OpenSSH per-connection server daemon (68.220.241.50:34138). Mar 14 00:09:03.428348 sshd[1646]: Accepted publickey for core from 68.220.241.50 port 34138 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:09:03.431350 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:09:03.444966 systemd-logind[1453]: New session 1 of user core. Mar 14 00:09:03.446560 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:09:03.462579 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:09:03.477639 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:09:03.489099 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:09:03.493189 (systemd)[1650]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:09:03.601466 systemd[1650]: Queued start job for default target default.target. 
Mar 14 00:09:03.617018 systemd[1650]: Created slice app.slice - User Application Slice. Mar 14 00:09:03.617320 systemd[1650]: Reached target paths.target - Paths. Mar 14 00:09:03.617356 systemd[1650]: Reached target timers.target - Timers. Mar 14 00:09:03.620723 systemd[1650]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:09:03.632452 systemd[1650]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:09:03.632721 systemd[1650]: Reached target sockets.target - Sockets. Mar 14 00:09:03.632814 systemd[1650]: Reached target basic.target - Basic System. Mar 14 00:09:03.632970 systemd[1650]: Reached target default.target - Main User Target. Mar 14 00:09:03.633069 systemd[1650]: Startup finished in 132ms. Mar 14 00:09:03.633187 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:09:03.641927 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:09:03.892554 update_engine[1454]: I20260314 00:09:03.890239 1454 update_attempter.cc:509] Updating boot flags... Mar 14 00:09:03.940557 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1669) Mar 14 00:09:03.999562 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1671) Mar 14 00:09:04.082230 systemd[1]: Started sshd@1-46.224.38.228:22-68.220.241.50:34148.service - OpenSSH per-connection server daemon (68.220.241.50:34148). Mar 14 00:09:04.689484 sshd[1679]: Accepted publickey for core from 68.220.241.50 port 34148 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:09:04.690692 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:09:04.696770 systemd-logind[1453]: New session 2 of user core. Mar 14 00:09:04.702891 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 14 00:09:05.120618 sshd[1679]: pam_unix(sshd:session): session closed for user core Mar 14 00:09:05.127040 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:09:05.127411 systemd[1]: sshd@1-46.224.38.228:22-68.220.241.50:34148.service: Deactivated successfully. Mar 14 00:09:05.130515 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:09:05.134216 systemd-logind[1453]: Removed session 2. Mar 14 00:09:05.233050 systemd[1]: Started sshd@2-46.224.38.228:22-68.220.241.50:34160.service - OpenSSH per-connection server daemon (68.220.241.50:34160). Mar 14 00:09:05.818585 sshd[1686]: Accepted publickey for core from 68.220.241.50 port 34160 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:09:05.819831 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:09:05.826114 systemd-logind[1453]: New session 3 of user core. Mar 14 00:09:05.828739 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:09:06.232303 sshd[1686]: pam_unix(sshd:session): session closed for user core Mar 14 00:09:06.237873 systemd[1]: sshd@2-46.224.38.228:22-68.220.241.50:34160.service: Deactivated successfully. Mar 14 00:09:06.240820 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:09:06.243861 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:09:06.245783 systemd-logind[1453]: Removed session 3. Mar 14 00:09:06.345228 systemd[1]: Started sshd@3-46.224.38.228:22-68.220.241.50:34170.service - OpenSSH per-connection server daemon (68.220.241.50:34170). Mar 14 00:09:06.931070 sshd[1693]: Accepted publickey for core from 68.220.241.50 port 34170 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:09:06.932273 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:09:06.937826 systemd-logind[1453]: New session 4 of user core. 
Mar 14 00:09:06.946987 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:09:07.350087 sshd[1693]: pam_unix(sshd:session): session closed for user core Mar 14 00:09:07.354719 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:09:07.354927 systemd[1]: sshd@3-46.224.38.228:22-68.220.241.50:34170.service: Deactivated successfully. Mar 14 00:09:07.356761 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:09:07.359573 systemd-logind[1453]: Removed session 4. Mar 14 00:09:07.454283 systemd[1]: Started sshd@4-46.224.38.228:22-68.220.241.50:34180.service - OpenSSH per-connection server daemon (68.220.241.50:34180). Mar 14 00:09:08.045410 sshd[1700]: Accepted publickey for core from 68.220.241.50 port 34180 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:09:08.046572 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:09:08.053618 systemd-logind[1453]: New session 5 of user core. Mar 14 00:09:08.062853 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:09:08.377801 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:09:08.378099 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:09:08.393411 sudo[1703]: pam_unix(sudo:session): session closed for user root Mar 14 00:09:08.488224 sshd[1700]: pam_unix(sshd:session): session closed for user core Mar 14 00:09:08.493049 systemd[1]: sshd@4-46.224.38.228:22-68.220.241.50:34180.service: Deactivated successfully. Mar 14 00:09:08.495412 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:09:08.496916 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:09:08.499215 systemd-logind[1453]: Removed session 5. 
Mar 14 00:09:08.600852 systemd[1]: Started sshd@5-46.224.38.228:22-68.220.241.50:34194.service - OpenSSH per-connection server daemon (68.220.241.50:34194). Mar 14 00:09:09.195580 sshd[1708]: Accepted publickey for core from 68.220.241.50 port 34194 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:09:09.197633 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:09:09.204691 systemd-logind[1453]: New session 6 of user core. Mar 14 00:09:09.210085 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:09:09.519137 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:09:09.519877 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:09:09.524162 sudo[1712]: pam_unix(sudo:session): session closed for user root Mar 14 00:09:09.530964 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:09:09.531323 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:09:09.550048 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:09:09.552626 auditctl[1715]: No rules Mar 14 00:09:09.553420 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:09:09.553769 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:09:09.557970 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:09:09.589448 augenrules[1733]: No rules Mar 14 00:09:09.590935 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Mar 14 00:09:09.594154 sudo[1711]: pam_unix(sudo:session): session closed for user root Mar 14 00:09:09.688221 sshd[1708]: pam_unix(sshd:session): session closed for user core Mar 14 00:09:09.695282 systemd[1]: sshd@5-46.224.38.228:22-68.220.241.50:34194.service: Deactivated successfully. Mar 14 00:09:09.700186 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:09:09.701178 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:09:09.703189 systemd-logind[1453]: Removed session 6. Mar 14 00:09:09.793322 systemd[1]: Started sshd@6-46.224.38.228:22-68.220.241.50:34198.service - OpenSSH per-connection server daemon (68.220.241.50:34198). Mar 14 00:09:10.395079 sshd[1741]: Accepted publickey for core from 68.220.241.50 port 34198 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:09:10.396658 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:09:10.403007 systemd-logind[1453]: New session 7 of user core. Mar 14 00:09:10.411944 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:09:10.719759 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:09:10.720039 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:09:11.031961 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:09:11.032830 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:09:11.275698 dockerd[1761]: time="2026-03-14T00:09:11.275151551Z" level=info msg="Starting up" Mar 14 00:09:11.393483 dockerd[1761]: time="2026-03-14T00:09:11.393012491Z" level=info msg="Loading containers: start." 
Mar 14 00:09:11.496636 kernel: Initializing XFRM netlink socket Mar 14 00:09:11.579378 systemd-networkd[1376]: docker0: Link UP Mar 14 00:09:11.596906 dockerd[1761]: time="2026-03-14T00:09:11.596797839Z" level=info msg="Loading containers: done." Mar 14 00:09:11.610672 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1216539148-merged.mount: Deactivated successfully. Mar 14 00:09:11.616301 dockerd[1761]: time="2026-03-14T00:09:11.616209304Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:09:11.616489 dockerd[1761]: time="2026-03-14T00:09:11.616362764Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:09:11.616581 dockerd[1761]: time="2026-03-14T00:09:11.616520907Z" level=info msg="Daemon has completed initialization" Mar 14 00:09:11.672662 dockerd[1761]: time="2026-03-14T00:09:11.670917106Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:09:11.672016 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:09:12.120457 containerd[1479]: time="2026-03-14T00:09:12.120195925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 14 00:09:12.475234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 14 00:09:12.482831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:09:12.614023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:09:12.621918 (kubelet)[1908]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:09:12.676589 kubelet[1908]: E0314 00:09:12.676207 1908 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:09:12.681397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:09:12.681621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:09:12.703894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161952826.mount: Deactivated successfully. Mar 14 00:09:13.688094 containerd[1479]: time="2026-03-14T00:09:13.688008373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:13.689757 containerd[1479]: time="2026-03-14T00:09:13.689714991Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701894" Mar 14 00:09:13.692561 containerd[1479]: time="2026-03-14T00:09:13.690690944Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:13.698434 containerd[1479]: time="2026-03-14T00:09:13.698377006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:13.700039 containerd[1479]: time="2026-03-14T00:09:13.700001314Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id 
\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 1.57975221s" Mar 14 00:09:13.700160 containerd[1479]: time="2026-03-14T00:09:13.700142965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\"" Mar 14 00:09:13.701192 containerd[1479]: time="2026-03-14T00:09:13.701165976Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 14 00:09:14.727060 containerd[1479]: time="2026-03-14T00:09:14.727013345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:14.728594 containerd[1479]: time="2026-03-14T00:09:14.728567804Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063059" Mar 14 00:09:14.729809 containerd[1479]: time="2026-03-14T00:09:14.729782146Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:14.733485 containerd[1479]: time="2026-03-14T00:09:14.733453540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:14.734519 containerd[1479]: time="2026-03-14T00:09:14.734475535Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.033180793s" Mar 14 00:09:14.734519 containerd[1479]: time="2026-03-14T00:09:14.734516429Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\"" Mar 14 00:09:14.735133 containerd[1479]: time="2026-03-14T00:09:14.735095510Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 14 00:09:15.667868 containerd[1479]: time="2026-03-14T00:09:15.667770067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:15.670590 containerd[1479]: time="2026-03-14T00:09:15.670136056Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797921" Mar 14 00:09:15.670590 containerd[1479]: time="2026-03-14T00:09:15.670191794Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:15.673662 containerd[1479]: time="2026-03-14T00:09:15.673606051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:15.675730 containerd[1479]: time="2026-03-14T00:09:15.675595474Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 940.461871ms" Mar 14 00:09:15.675730 
containerd[1479]: time="2026-03-14T00:09:15.675637088Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\"" Mar 14 00:09:15.676314 containerd[1479]: time="2026-03-14T00:09:15.676158942Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 14 00:09:16.631083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4093739765.mount: Deactivated successfully. Mar 14 00:09:16.856650 containerd[1479]: time="2026-03-14T00:09:16.856584517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:16.858342 containerd[1479]: time="2026-03-14T00:09:16.858295465Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329609" Mar 14 00:09:16.859240 containerd[1479]: time="2026-03-14T00:09:16.859193712Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:16.861351 containerd[1479]: time="2026-03-14T00:09:16.861273818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:16.862077 containerd[1479]: time="2026-03-14T00:09:16.862035902Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.185843989s" Mar 14 00:09:16.862077 containerd[1479]: time="2026-03-14T00:09:16.862072073Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\"" Mar 14 00:09:16.862934 containerd[1479]: time="2026-03-14T00:09:16.862520337Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 14 00:09:17.362119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount589327169.mount: Deactivated successfully. Mar 14 00:09:18.340435 containerd[1479]: time="2026-03-14T00:09:18.340207871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:18.344802 containerd[1479]: time="2026-03-14T00:09:18.344433482Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309" Mar 14 00:09:18.347783 containerd[1479]: time="2026-03-14T00:09:18.346988199Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:18.350808 containerd[1479]: time="2026-03-14T00:09:18.350750193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:18.353236 containerd[1479]: time="2026-03-14T00:09:18.352153169Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.489582257s" Mar 14 00:09:18.353236 containerd[1479]: time="2026-03-14T00:09:18.352200383Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference 
\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Mar 14 00:09:18.353236 containerd[1479]: time="2026-03-14T00:09:18.352672683Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:09:18.808025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3354361336.mount: Deactivated successfully. Mar 14 00:09:18.814720 containerd[1479]: time="2026-03-14T00:09:18.814658768Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:18.816082 containerd[1479]: time="2026-03-14T00:09:18.815785901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Mar 14 00:09:18.817192 containerd[1479]: time="2026-03-14T00:09:18.817143984Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:18.820522 containerd[1479]: time="2026-03-14T00:09:18.820474850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:18.821926 containerd[1479]: time="2026-03-14T00:09:18.821752589Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 469.052258ms" Mar 14 00:09:18.821926 containerd[1479]: time="2026-03-14T00:09:18.821792081Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 14 00:09:18.822407 containerd[1479]: 
time="2026-03-14T00:09:18.822227850Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 14 00:09:19.327982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692115121.mount: Deactivated successfully. Mar 14 00:09:20.002924 containerd[1479]: time="2026-03-14T00:09:20.002854273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:20.005581 containerd[1479]: time="2026-03-14T00:09:20.005513205Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738239" Mar 14 00:09:20.007807 containerd[1479]: time="2026-03-14T00:09:20.007706449Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:20.012619 containerd[1479]: time="2026-03-14T00:09:20.012562626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:20.014827 containerd[1479]: time="2026-03-14T00:09:20.014676608Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.192405146s" Mar 14 00:09:20.014827 containerd[1479]: time="2026-03-14T00:09:20.014718299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Mar 14 00:09:22.724947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Mar 14 00:09:22.733723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:09:22.855882 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:09:22.856257 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:09:22.899474 kubelet[2140]: E0314 00:09:22.899415 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:09:22.901298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:09:22.901417 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:09:22.911096 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:09:22.920989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:09:22.959059 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit session-7.scope)... Mar 14 00:09:22.959214 systemd[1]: Reloading... Mar 14 00:09:23.075580 zram_generator::config[2193]: No configuration found. Mar 14 00:09:23.181243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:09:23.258958 systemd[1]: Reloading finished in 299 ms. Mar 14 00:09:23.318097 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:09:23.318191 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:09:23.318808 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:09:23.322859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:09:23.446170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:09:23.457982 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:09:23.495857 kubelet[2241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:09:24.071584 kubelet[2241]: I0314 00:09:24.071151 2241 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:09:24.071584 kubelet[2241]: I0314 00:09:24.071205 2241 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:09:24.071584 kubelet[2241]: I0314 00:09:24.071231 2241 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:09:24.071584 kubelet[2241]: I0314 00:09:24.071236 2241 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:09:24.071584 kubelet[2241]: I0314 00:09:24.071590 2241 server.go:951] "Client rotation is on, will bootstrap in background" Mar 14 00:09:24.083577 kubelet[2241]: E0314 00:09:24.082493 2241 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.224.38.228:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.38.228:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:09:24.086362 kubelet[2241]: I0314 00:09:24.086302 2241 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:09:24.092337 kubelet[2241]: E0314 00:09:24.092280 2241 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:09:24.092440 kubelet[2241]: I0314 00:09:24.092358 2241 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:09:24.095551 kubelet[2241]: I0314 00:09:24.095205 2241 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:09:24.096879 kubelet[2241]: I0314 00:09:24.096811 2241 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:09:24.097171 kubelet[2241]: I0314 00:09:24.096885 2241 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-0ed13f424d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:09:24.097259 kubelet[2241]: I0314 00:09:24.097174 2241 topology_manager.go:143] "Creating topology manager with none policy" Mar 14 
00:09:24.097259 kubelet[2241]: I0314 00:09:24.097190 2241 container_manager_linux.go:308] "Creating device plugin manager" Mar 14 00:09:24.097357 kubelet[2241]: I0314 00:09:24.097338 2241 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:09:24.099918 kubelet[2241]: I0314 00:09:24.099869 2241 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 14 00:09:24.100583 kubelet[2241]: I0314 00:09:24.100131 2241 kubelet.go:482] "Attempting to sync node with API server" Mar 14 00:09:24.100583 kubelet[2241]: I0314 00:09:24.100146 2241 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:09:24.100583 kubelet[2241]: I0314 00:09:24.100162 2241 kubelet.go:394] "Adding apiserver pod source" Mar 14 00:09:24.100583 kubelet[2241]: I0314 00:09:24.100183 2241 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:09:24.104073 kubelet[2241]: I0314 00:09:24.104032 2241 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:09:24.105290 kubelet[2241]: I0314 00:09:24.105253 2241 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:09:24.105290 kubelet[2241]: I0314 00:09:24.105294 2241 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:09:24.105393 kubelet[2241]: W0314 00:09:24.105337 2241 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 14 00:09:24.107791 kubelet[2241]: I0314 00:09:24.107765 2241 server.go:1257] "Started kubelet" Mar 14 00:09:24.111568 kubelet[2241]: I0314 00:09:24.110703 2241 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:09:24.111876 kubelet[2241]: I0314 00:09:24.111854 2241 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:09:24.112486 kubelet[2241]: I0314 00:09:24.111883 2241 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:09:24.112620 kubelet[2241]: I0314 00:09:24.112604 2241 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:09:24.112950 kubelet[2241]: I0314 00:09:24.112932 2241 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:09:24.115579 kubelet[2241]: I0314 00:09:24.115557 2241 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 14 00:09:24.115885 kubelet[2241]: E0314 00:09:24.114656 2241 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.38.228:6443/api/v1/namespaces/default/events\": dial tcp 46.224.38.228:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-0ed13f424d.189c8c9e23a2fe80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-0ed13f424d,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-0ed13f424d,},FirstTimestamp:2026-03-14 00:09:24.107738752 +0000 UTC m=+0.645047054,LastTimestamp:2026-03-14 00:09:24.107738752 +0000 UTC m=+0.645047054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-0ed13f424d,}" Mar 14 00:09:24.116572 kubelet[2241]: I0314 00:09:24.116371 2241 dynamic_serving_content.go:135] "Starting 
controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:09:24.118020 kubelet[2241]: I0314 00:09:24.117993 2241 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 14 00:09:24.118407 kubelet[2241]: E0314 00:09:24.118380 2241 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" Mar 14 00:09:24.119222 kubelet[2241]: I0314 00:09:24.119193 2241 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:09:24.120284 kubelet[2241]: I0314 00:09:24.120269 2241 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:09:24.121498 kubelet[2241]: E0314 00:09:24.121470 2241 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.38.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0ed13f424d?timeout=10s\": dial tcp 46.224.38.228:6443: connect: connection refused" interval="200ms" Mar 14 00:09:24.123003 kubelet[2241]: I0314 00:09:24.122973 2241 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:09:24.123207 kubelet[2241]: I0314 00:09:24.123188 2241 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:09:24.125139 kubelet[2241]: E0314 00:09:24.125119 2241 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:09:24.126362 kubelet[2241]: I0314 00:09:24.126346 2241 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:09:24.151281 kubelet[2241]: I0314 00:09:24.151003 2241 cpu_manager.go:225] "Starting" policy="none" Mar 14 00:09:24.151418 kubelet[2241]: I0314 00:09:24.151405 2241 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 00:09:24.151480 kubelet[2241]: I0314 00:09:24.151471 2241 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 14 00:09:24.157361 kubelet[2241]: I0314 00:09:24.157330 2241 policy_none.go:50] "Start" Mar 14 00:09:24.157361 kubelet[2241]: I0314 00:09:24.157358 2241 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:09:24.157485 kubelet[2241]: I0314 00:09:24.157371 2241 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:09:24.159594 kubelet[2241]: I0314 00:09:24.159257 2241 policy_none.go:44] "Start" Mar 14 00:09:24.161436 kubelet[2241]: I0314 00:09:24.161389 2241 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:09:24.162921 kubelet[2241]: I0314 00:09:24.162896 2241 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:09:24.162921 kubelet[2241]: I0314 00:09:24.162918 2241 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 14 00:09:24.163011 kubelet[2241]: I0314 00:09:24.162953 2241 kubelet.go:2501] "Starting kubelet main sync loop" Mar 14 00:09:24.163033 kubelet[2241]: E0314 00:09:24.162995 2241 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:09:24.165515 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 14 00:09:24.180636 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:09:24.184031 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 14 00:09:24.194578 kubelet[2241]: E0314 00:09:24.193405 2241 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:09:24.194578 kubelet[2241]: I0314 00:09:24.193760 2241 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:09:24.194578 kubelet[2241]: I0314 00:09:24.193782 2241 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:09:24.195253 kubelet[2241]: I0314 00:09:24.194529 2241 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:09:24.197980 kubelet[2241]: E0314 00:09:24.197593 2241 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:09:24.197980 kubelet[2241]: E0314 00:09:24.197704 2241 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-0ed13f424d\" not found" Mar 14 00:09:24.282035 systemd[1]: Created slice kubepods-burstable-pod98b0eeedf9844dccccb33c15b6df4a0e.slice - libcontainer container kubepods-burstable-pod98b0eeedf9844dccccb33c15b6df4a0e.slice. 
Mar 14 00:09:24.297643 kubelet[2241]: I0314 00:09:24.297512 2241 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.298578 kubelet[2241]: E0314 00:09:24.298260 2241 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://46.224.38.228:6443/api/v1/nodes\": dial tcp 46.224.38.228:6443: connect: connection refused" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.301092 kubelet[2241]: E0314 00:09:24.301053 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.302149 systemd[1]: Created slice kubepods-burstable-pod278cfbf105e654dcb286680ddb6f3a87.slice - libcontainer container kubepods-burstable-pod278cfbf105e654dcb286680ddb6f3a87.slice.
Mar 14 00:09:24.304595 kubelet[2241]: E0314 00:09:24.304528 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.307019 systemd[1]: Created slice kubepods-burstable-pod2099efd1b54f997fb4eb72983d6e1e03.slice - libcontainer container kubepods-burstable-pod2099efd1b54f997fb4eb72983d6e1e03.slice.
Mar 14 00:09:24.308791 kubelet[2241]: E0314 00:09:24.308742 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.322716 kubelet[2241]: I0314 00:09:24.322514 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b0eeedf9844dccccb33c15b6df4a0e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" (UID: \"98b0eeedf9844dccccb33c15b6df4a0e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.322716 kubelet[2241]: I0314 00:09:24.322598 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b0eeedf9844dccccb33c15b6df4a0e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" (UID: \"98b0eeedf9844dccccb33c15b6df4a0e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.322716 kubelet[2241]: I0314 00:09:24.322628 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.322716 kubelet[2241]: I0314 00:09:24.322651 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.322716 kubelet[2241]: I0314 00:09:24.322673 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b0eeedf9844dccccb33c15b6df4a0e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" (UID: \"98b0eeedf9844dccccb33c15b6df4a0e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.323058 kubelet[2241]: I0314 00:09:24.322694 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.323058 kubelet[2241]: I0314 00:09:24.322714 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.323058 kubelet[2241]: I0314 00:09:24.322735 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.323058 kubelet[2241]: I0314 00:09:24.322758 2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2099efd1b54f997fb4eb72983d6e1e03-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-0ed13f424d\" (UID: \"2099efd1b54f997fb4eb72983d6e1e03\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.323058 kubelet[2241]: E0314 00:09:24.323007 2241 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.38.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0ed13f424d?timeout=10s\": dial tcp 46.224.38.228:6443: connect: connection refused" interval="400ms"
Mar 14 00:09:24.501446 kubelet[2241]: I0314 00:09:24.501398 2241 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.502031 kubelet[2241]: E0314 00:09:24.501970 2241 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://46.224.38.228:6443/api/v1/nodes\": dial tcp 46.224.38.228:6443: connect: connection refused" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.605814 containerd[1479]: time="2026-03-14T00:09:24.605599085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-0ed13f424d,Uid:98b0eeedf9844dccccb33c15b6df4a0e,Namespace:kube-system,Attempt:0,}"
Mar 14 00:09:24.607407 containerd[1479]: time="2026-03-14T00:09:24.607276009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-0ed13f424d,Uid:278cfbf105e654dcb286680ddb6f3a87,Namespace:kube-system,Attempt:0,}"
Mar 14 00:09:24.611883 containerd[1479]: time="2026-03-14T00:09:24.611637659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-0ed13f424d,Uid:2099efd1b54f997fb4eb72983d6e1e03,Namespace:kube-system,Attempt:0,}"
Mar 14 00:09:24.724481 kubelet[2241]: E0314 00:09:24.724425 2241 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.38.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0ed13f424d?timeout=10s\": dial tcp 46.224.38.228:6443: connect: connection refused" interval="800ms"
Mar 14 00:09:24.904498 kubelet[2241]: I0314 00:09:24.904394 2241 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:24.905214 kubelet[2241]: E0314 00:09:24.905173 2241 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://46.224.38.228:6443/api/v1/nodes\": dial tcp 46.224.38.228:6443: connect: connection refused" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:25.071966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4076854960.mount: Deactivated successfully.
Mar 14 00:09:25.076599 containerd[1479]: time="2026-03-14T00:09:25.076519976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:09:25.084495 containerd[1479]: time="2026-03-14T00:09:25.084395054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Mar 14 00:09:25.086565 containerd[1479]: time="2026-03-14T00:09:25.085954818Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:09:25.087422 containerd[1479]: time="2026-03-14T00:09:25.087389553Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:09:25.088265 containerd[1479]: time="2026-03-14T00:09:25.088237391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:09:25.088403 containerd[1479]: time="2026-03-14T00:09:25.088367821Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:09:25.089475 containerd[1479]: time="2026-03-14T00:09:25.089434350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:09:25.091986 containerd[1479]: time="2026-03-14T00:09:25.091898285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:09:25.095619 containerd[1479]: time="2026-03-14T00:09:25.095565341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.852545ms"
Mar 14 00:09:25.098909 containerd[1479]: time="2026-03-14T00:09:25.098520751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.158841ms"
Mar 14 00:09:25.103484 containerd[1479]: time="2026-03-14T00:09:25.103400490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.708143ms"
Mar 14 00:09:25.222511 containerd[1479]: time="2026-03-14T00:09:25.221993690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:09:25.222511 containerd[1479]: time="2026-03-14T00:09:25.222067547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:09:25.222511 containerd[1479]: time="2026-03-14T00:09:25.222084511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:09:25.222511 containerd[1479]: time="2026-03-14T00:09:25.222164290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:09:25.224591 containerd[1479]: time="2026-03-14T00:09:25.223304716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:09:25.224591 containerd[1479]: time="2026-03-14T00:09:25.223355768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:09:25.224591 containerd[1479]: time="2026-03-14T00:09:25.223370571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:09:25.224591 containerd[1479]: time="2026-03-14T00:09:25.223435906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:09:25.225050 containerd[1479]: time="2026-03-14T00:09:25.224752214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:09:25.225050 containerd[1479]: time="2026-03-14T00:09:25.224806226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:09:25.225050 containerd[1479]: time="2026-03-14T00:09:25.224821870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:09:25.225050 containerd[1479]: time="2026-03-14T00:09:25.224889686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:09:25.253737 systemd[1]: Started cri-containerd-3113718275e545fbc6d45493569987674ff16f8d0ed2ec769a97014332e5dcdd.scope - libcontainer container 3113718275e545fbc6d45493569987674ff16f8d0ed2ec769a97014332e5dcdd.
Mar 14 00:09:25.255217 systemd[1]: Started cri-containerd-d5ad850719f9193537d889b043cbd62f07ea0adc9f46cb3a8183454a29995e8b.scope - libcontainer container d5ad850719f9193537d889b043cbd62f07ea0adc9f46cb3a8183454a29995e8b.
Mar 14 00:09:25.260865 systemd[1]: Started cri-containerd-525b8186947323188c180c05dd974a4f24ec99f07b5c58e293c85fc3048688fa.scope - libcontainer container 525b8186947323188c180c05dd974a4f24ec99f07b5c58e293c85fc3048688fa.
Mar 14 00:09:25.314706 containerd[1479]: time="2026-03-14T00:09:25.314558455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-0ed13f424d,Uid:2099efd1b54f997fb4eb72983d6e1e03,Namespace:kube-system,Attempt:0,} returns sandbox id \"3113718275e545fbc6d45493569987674ff16f8d0ed2ec769a97014332e5dcdd\""
Mar 14 00:09:25.317513 containerd[1479]: time="2026-03-14T00:09:25.317473215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-0ed13f424d,Uid:278cfbf105e654dcb286680ddb6f3a87,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5ad850719f9193537d889b043cbd62f07ea0adc9f46cb3a8183454a29995e8b\""
Mar 14 00:09:25.324661 containerd[1479]: time="2026-03-14T00:09:25.324450924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-0ed13f424d,Uid:98b0eeedf9844dccccb33c15b6df4a0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"525b8186947323188c180c05dd974a4f24ec99f07b5c58e293c85fc3048688fa\""
Mar 14 00:09:25.326905 containerd[1479]: time="2026-03-14T00:09:25.326849764Z" level=info msg="CreateContainer within sandbox \"3113718275e545fbc6d45493569987674ff16f8d0ed2ec769a97014332e5dcdd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:09:25.331101 containerd[1479]: time="2026-03-14T00:09:25.330987449Z" level=info msg="CreateContainer within sandbox \"d5ad850719f9193537d889b043cbd62f07ea0adc9f46cb3a8183454a29995e8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:09:25.333581 containerd[1479]: time="2026-03-14T00:09:25.333519520Z" level=info msg="CreateContainer within sandbox \"525b8186947323188c180c05dd974a4f24ec99f07b5c58e293c85fc3048688fa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:09:25.347859 containerd[1479]: time="2026-03-14T00:09:25.347705031Z" level=info msg="CreateContainer within sandbox \"d5ad850719f9193537d889b043cbd62f07ea0adc9f46cb3a8183454a29995e8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971\""
Mar 14 00:09:25.348759 containerd[1479]: time="2026-03-14T00:09:25.348683300Z" level=info msg="StartContainer for \"f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971\""
Mar 14 00:09:25.350110 containerd[1479]: time="2026-03-14T00:09:25.350052019Z" level=info msg="CreateContainer within sandbox \"525b8186947323188c180c05dd974a4f24ec99f07b5c58e293c85fc3048688fa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e38017edec4e30794b2808941fd61bd4dce887fc1fbe31598d198bf4793295cb\""
Mar 14 00:09:25.351171 containerd[1479]: time="2026-03-14T00:09:25.351140153Z" level=info msg="CreateContainer within sandbox \"3113718275e545fbc6d45493569987674ff16f8d0ed2ec769a97014332e5dcdd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ac16838401c27651c5566454226257abf4d06bc5f1e3c170cb4db7af9528885\""
Mar 14 00:09:25.351929 containerd[1479]: time="2026-03-14T00:09:25.351904772Z" level=info msg="StartContainer for \"7ac16838401c27651c5566454226257abf4d06bc5f1e3c170cb4db7af9528885\""
Mar 14 00:09:25.354818 containerd[1479]: time="2026-03-14T00:09:25.354780483Z" level=info msg="StartContainer for \"e38017edec4e30794b2808941fd61bd4dce887fc1fbe31598d198bf4793295cb\""
Mar 14 00:09:25.385723 systemd[1]: Started cri-containerd-7ac16838401c27651c5566454226257abf4d06bc5f1e3c170cb4db7af9528885.scope - libcontainer container 7ac16838401c27651c5566454226257abf4d06bc5f1e3c170cb4db7af9528885.
Mar 14 00:09:25.394607 systemd[1]: Started cri-containerd-e38017edec4e30794b2808941fd61bd4dce887fc1fbe31598d198bf4793295cb.scope - libcontainer container e38017edec4e30794b2808941fd61bd4dce887fc1fbe31598d198bf4793295cb.
Mar 14 00:09:25.396828 systemd[1]: Started cri-containerd-f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971.scope - libcontainer container f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971.
Mar 14 00:09:25.460995 containerd[1479]: time="2026-03-14T00:09:25.460348883Z" level=info msg="StartContainer for \"f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971\" returns successfully"
Mar 14 00:09:25.466792 containerd[1479]: time="2026-03-14T00:09:25.466745016Z" level=info msg="StartContainer for \"7ac16838401c27651c5566454226257abf4d06bc5f1e3c170cb4db7af9528885\" returns successfully"
Mar 14 00:09:25.475022 containerd[1479]: time="2026-03-14T00:09:25.474982218Z" level=info msg="StartContainer for \"e38017edec4e30794b2808941fd61bd4dce887fc1fbe31598d198bf4793295cb\" returns successfully"
Mar 14 00:09:25.525178 kubelet[2241]: E0314 00:09:25.525091 2241 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.38.228:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-0ed13f424d?timeout=10s\": dial tcp 46.224.38.228:6443: connect: connection refused" interval="1.6s"
Mar 14 00:09:25.709955 kubelet[2241]: I0314 00:09:25.709919 2241 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:26.178323 kubelet[2241]: E0314 00:09:26.178254 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:26.181035 kubelet[2241]: E0314 00:09:26.181005 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:26.184216 kubelet[2241]: E0314 00:09:26.184186 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.172112 kubelet[2241]: E0314 00:09:27.172059 2241 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.188200 kubelet[2241]: E0314 00:09:27.188168 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.188677 kubelet[2241]: E0314 00:09:27.188550 2241 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-0ed13f424d\" not found" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.312457 kubelet[2241]: I0314 00:09:27.312403 2241 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.319587 kubelet[2241]: I0314 00:09:27.319096 2241 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.328063 kubelet[2241]: E0314 00:09:27.328025 2241 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-0ed13f424d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.328063 kubelet[2241]: I0314 00:09:27.328058 2241 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.332501 kubelet[2241]: E0314 00:09:27.332462 2241 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.332883 kubelet[2241]: I0314 00:09:27.332669 2241 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:27.335068 kubelet[2241]: E0314 00:09:27.335031 2241 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:28.104918 kubelet[2241]: I0314 00:09:28.104550 2241 apiserver.go:52] "Watching apiserver"
Mar 14 00:09:28.119490 kubelet[2241]: I0314 00:09:28.119459 2241 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:09:28.189196 kubelet[2241]: I0314 00:09:28.188951 2241 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:29.414914 systemd[1]: Reloading requested from client PID 2524 ('systemctl') (unit session-7.scope)...
Mar 14 00:09:29.415270 systemd[1]: Reloading...
Mar 14 00:09:29.501561 zram_generator::config[2563]: No configuration found.
Mar 14 00:09:29.625100 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:09:29.715681 systemd[1]: Reloading finished in 300 ms.
Mar 14 00:09:29.763437 kubelet[2241]: I0314 00:09:29.762998 2241 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:09:29.763243 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:09:29.780799 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:09:29.781088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:09:29.781159 systemd[1]: kubelet.service: Consumed 1.036s CPU time, 123.5M memory peak, 0B memory swap peak.
Mar 14 00:09:29.790014 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:09:29.914905 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:09:29.925924 (kubelet)[2609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:09:29.983957 kubelet[2609]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:09:29.990893 kubelet[2609]: I0314 00:09:29.990837 2609 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:09:29.990893 kubelet[2609]: I0314 00:09:29.990887 2609 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:09:29.990893 kubelet[2609]: I0314 00:09:29.990901 2609 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:09:29.990893 kubelet[2609]: I0314 00:09:29.990905 2609 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:09:29.991206 kubelet[2609]: I0314 00:09:29.991187 2609 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:09:29.992522 kubelet[2609]: I0314 00:09:29.992432 2609 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:09:29.995005 kubelet[2609]: I0314 00:09:29.994723 2609 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:09:30.003332 kubelet[2609]: E0314 00:09:30.003284 2609 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:09:30.003465 kubelet[2609]: I0314 00:09:30.003351 2609 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:09:30.009306 kubelet[2609]: I0314 00:09:30.009053 2609 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:09:30.009484 kubelet[2609]: I0314 00:09:30.009327 2609 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:09:30.009655 kubelet[2609]: I0314 00:09:30.009350 2609 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-0ed13f424d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:09:30.009655 kubelet[2609]: I0314 00:09:30.009633 2609 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:09:30.009655 kubelet[2609]: I0314 00:09:30.009643 2609 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:09:30.009934 kubelet[2609]: I0314 00:09:30.009677 2609 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:09:30.009934 kubelet[2609]: I0314 00:09:30.009899 2609 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:09:30.010038 kubelet[2609]: I0314 00:09:30.010033 2609 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:09:30.010113 kubelet[2609]: I0314 00:09:30.010051 2609 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:09:30.010113 kubelet[2609]: I0314 00:09:30.010067 2609 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:09:30.010233 kubelet[2609]: I0314 00:09:30.010122 2609 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:09:30.013588 kubelet[2609]: I0314 00:09:30.012795 2609 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:09:30.014680 kubelet[2609]: I0314 00:09:30.014657 2609 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:09:30.014795 kubelet[2609]: I0314 00:09:30.014785 2609 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:09:30.018522 kubelet[2609]: I0314 00:09:30.018500 2609 server.go:1257] "Started kubelet"
Mar 14 00:09:30.020645 kubelet[2609]: I0314 00:09:30.020624 2609 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:09:30.020819 kubelet[2609]: I0314 00:09:30.020778 2609 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:09:30.021643 kubelet[2609]: I0314 00:09:30.021620 2609 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:09:30.028072 kubelet[2609]: I0314 00:09:30.028011 2609 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:09:30.028180 kubelet[2609]: I0314 00:09:30.028099 2609 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:09:30.028257 kubelet[2609]: I0314 00:09:30.028236 2609 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:09:30.035115 kubelet[2609]: I0314 00:09:30.035076 2609 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:09:30.035377 kubelet[2609]: E0314 00:09:30.035329 2609 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-0ed13f424d\" not found"
Mar 14 00:09:30.037560 kubelet[2609]: I0314 00:09:30.035914 2609 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:09:30.038722 kubelet[2609]: I0314 00:09:30.038692 2609 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:09:30.038829 kubelet[2609]: I0314 00:09:30.038809 2609 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:09:30.042865 kubelet[2609]: I0314 00:09:30.042821 2609 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:09:30.044684 kubelet[2609]: I0314 00:09:30.044656 2609 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:09:30.044684 kubelet[2609]: I0314 00:09:30.044681 2609 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:09:30.044794 kubelet[2609]: I0314 00:09:30.044704 2609 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:09:30.044794 kubelet[2609]: E0314 00:09:30.044744 2609 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:09:30.058083 kubelet[2609]: I0314 00:09:30.058050 2609 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:09:30.067755 kubelet[2609]: I0314 00:09:30.066657 2609 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:09:30.067755 kubelet[2609]: I0314 00:09:30.066681 2609 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:09:30.121635 kubelet[2609]: I0314 00:09:30.121605 2609 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:09:30.121819 kubelet[2609]: I0314 00:09:30.121776 2609 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:09:30.122636 kubelet[2609]: I0314 00:09:30.122609 2609 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:09:30.122866 kubelet[2609]: I0314 00:09:30.122850 2609 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:09:30.122936 kubelet[2609]: I0314 00:09:30.122913 2609 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:09:30.122987 kubelet[2609]: I0314 00:09:30.122980 2609 policy_none.go:50] "Start"
Mar 14 00:09:30.123038 kubelet[2609]: I0314 00:09:30.123030 2609 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:09:30.123094 kubelet[2609]: I0314 00:09:30.123085 2609 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:09:30.123795 kubelet[2609]: I0314 00:09:30.123475 2609 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:09:30.123929 kubelet[2609]: I0314 00:09:30.123915 2609 policy_none.go:44] "Start"
Mar 14 00:09:30.129934 kubelet[2609]: E0314 00:09:30.129900 2609 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:09:30.130204 kubelet[2609]: I0314 00:09:30.130179 2609 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:09:30.130271 kubelet[2609]: I0314 00:09:30.130196 2609 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:09:30.130822 kubelet[2609]: I0314 00:09:30.130797 2609 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:09:30.132964 kubelet[2609]: E0314 00:09:30.132864 2609 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:09:30.146049 kubelet[2609]: I0314 00:09:30.146018 2609 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:30.148579 kubelet[2609]: I0314 00:09:30.146399 2609 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:30.148579 kubelet[2609]: I0314 00:09:30.146700 2609 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:30.156751 kubelet[2609]: E0314 00:09:30.156720 2609 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:30.237589 kubelet[2609]: I0314 00:09:30.235636 2609 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:30.242117 kubelet[2609]: I0314 00:09:30.242026 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:30.242544 kubelet[2609]: I0314 00:09:30.242344 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d"
Mar 14 00:09:30.242752 kubelet[2609]: I0314 00:09:30.242616 2609 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2099efd1b54f997fb4eb72983d6e1e03-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-0ed13f424d\" (UID: \"2099efd1b54f997fb4eb72983d6e1e03\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.242752 kubelet[2609]: I0314 00:09:30.242676 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b0eeedf9844dccccb33c15b6df4a0e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" (UID: \"98b0eeedf9844dccccb33c15b6df4a0e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.242752 kubelet[2609]: I0314 00:09:30.242697 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b0eeedf9844dccccb33c15b6df4a0e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" (UID: \"98b0eeedf9844dccccb33c15b6df4a0e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.242752 kubelet[2609]: I0314 00:09:30.242721 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.242752 kubelet[2609]: I0314 00:09:30.242736 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b0eeedf9844dccccb33c15b6df4a0e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" (UID: \"98b0eeedf9844dccccb33c15b6df4a0e\") " 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.242926 kubelet[2609]: I0314 00:09:30.242907 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.242926 kubelet[2609]: I0314 00:09:30.242966 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/278cfbf105e654dcb286680ddb6f3a87-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-0ed13f424d\" (UID: \"278cfbf105e654dcb286680ddb6f3a87\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.246681 kubelet[2609]: I0314 00:09:30.246654 2609 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:30.246757 kubelet[2609]: I0314 00:09:30.246734 2609 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:31.015950 kubelet[2609]: I0314 00:09:31.015899 2609 apiserver.go:52] "Watching apiserver" Mar 14 00:09:31.039743 kubelet[2609]: I0314 00:09:31.039697 2609 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:09:31.105449 kubelet[2609]: I0314 00:09:31.105398 2609 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:31.110738 kubelet[2609]: I0314 00:09:31.107526 2609 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:31.120579 kubelet[2609]: E0314 00:09:31.119840 2609 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081-3-6-n-0ed13f424d\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:31.121721 kubelet[2609]: E0314 00:09:31.121688 2609 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-0ed13f424d\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d" Mar 14 00:09:32.152745 kubelet[2609]: I0314 00:09:32.152650 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-0ed13f424d" podStartSLOduration=2.152581636 podStartE2EDuration="2.152581636s" podCreationTimestamp="2026-03-14 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:09:32.150311317 +0000 UTC m=+2.220181128" watchObservedRunningTime="2026-03-14 00:09:32.152581636 +0000 UTC m=+2.222451527" Mar 14 00:09:32.153285 kubelet[2609]: I0314 00:09:32.152838 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-0ed13f424d" podStartSLOduration=2.152829804 podStartE2EDuration="2.152829804s" podCreationTimestamp="2026-03-14 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:09:32.140065895 +0000 UTC m=+2.209935746" watchObservedRunningTime="2026-03-14 00:09:32.152829804 +0000 UTC m=+2.222699735" Mar 14 00:09:32.550112 kubelet[2609]: I0314 00:09:32.549479 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-0ed13f424d" podStartSLOduration=4.549459642 podStartE2EDuration="4.549459642s" podCreationTimestamp="2026-03-14 00:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:09:32.165694532 +0000 UTC 
m=+2.235564383" watchObservedRunningTime="2026-03-14 00:09:32.549459642 +0000 UTC m=+2.619329493" Mar 14 00:09:35.597675 kubelet[2609]: I0314 00:09:35.597476 2609 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:09:35.598094 containerd[1479]: time="2026-03-14T00:09:35.598036743Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:09:35.598377 kubelet[2609]: I0314 00:09:35.598283 2609 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:09:36.682445 systemd[1]: Created slice kubepods-besteffort-podc89426f4_ffda_4609_ae95_2cd68ecb610b.slice - libcontainer container kubepods-besteffort-podc89426f4_ffda_4609_ae95_2cd68ecb610b.slice. Mar 14 00:09:36.683590 kubelet[2609]: I0314 00:09:36.683507 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c89426f4-ffda-4609-ae95-2cd68ecb610b-kube-proxy\") pod \"kube-proxy-w2zmn\" (UID: \"c89426f4-ffda-4609-ae95-2cd68ecb610b\") " pod="kube-system/kube-proxy-w2zmn" Mar 14 00:09:36.683590 kubelet[2609]: I0314 00:09:36.683564 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hncl\" (UniqueName: \"kubernetes.io/projected/c89426f4-ffda-4609-ae95-2cd68ecb610b-kube-api-access-7hncl\") pod \"kube-proxy-w2zmn\" (UID: \"c89426f4-ffda-4609-ae95-2cd68ecb610b\") " pod="kube-system/kube-proxy-w2zmn" Mar 14 00:09:36.683590 kubelet[2609]: I0314 00:09:36.683589 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c89426f4-ffda-4609-ae95-2cd68ecb610b-xtables-lock\") pod \"kube-proxy-w2zmn\" (UID: \"c89426f4-ffda-4609-ae95-2cd68ecb610b\") " pod="kube-system/kube-proxy-w2zmn" Mar 14 00:09:36.683904 
kubelet[2609]: I0314 00:09:36.683606 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c89426f4-ffda-4609-ae95-2cd68ecb610b-lib-modules\") pod \"kube-proxy-w2zmn\" (UID: \"c89426f4-ffda-4609-ae95-2cd68ecb610b\") " pod="kube-system/kube-proxy-w2zmn" Mar 14 00:09:36.862984 systemd[1]: Created slice kubepods-besteffort-pod055ae80c_f5bf_463b_9abc_eb7e4186a276.slice - libcontainer container kubepods-besteffort-pod055ae80c_f5bf_463b_9abc_eb7e4186a276.slice. Mar 14 00:09:36.884278 kubelet[2609]: I0314 00:09:36.884138 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/055ae80c-f5bf-463b-9abc-eb7e4186a276-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-5xcpm\" (UID: \"055ae80c-f5bf-463b-9abc-eb7e4186a276\") " pod="tigera-operator/tigera-operator-6cf4cccc57-5xcpm" Mar 14 00:09:36.884278 kubelet[2609]: I0314 00:09:36.884200 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h49h\" (UniqueName: \"kubernetes.io/projected/055ae80c-f5bf-463b-9abc-eb7e4186a276-kube-api-access-4h49h\") pod \"tigera-operator-6cf4cccc57-5xcpm\" (UID: \"055ae80c-f5bf-463b-9abc-eb7e4186a276\") " pod="tigera-operator/tigera-operator-6cf4cccc57-5xcpm" Mar 14 00:09:36.994115 containerd[1479]: time="2026-03-14T00:09:36.994061023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2zmn,Uid:c89426f4-ffda-4609-ae95-2cd68ecb610b,Namespace:kube-system,Attempt:0,}" Mar 14 00:09:37.023398 containerd[1479]: time="2026-03-14T00:09:37.023238729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:09:37.023398 containerd[1479]: time="2026-03-14T00:09:37.023306381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:09:37.023981 containerd[1479]: time="2026-03-14T00:09:37.023322223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:37.023981 containerd[1479]: time="2026-03-14T00:09:37.023412199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:37.050868 systemd[1]: Started cri-containerd-a3d5bf27d61e574858d3074c7b724afc9646d5b64e848f3f432962190f5e5341.scope - libcontainer container a3d5bf27d61e574858d3074c7b724afc9646d5b64e848f3f432962190f5e5341. Mar 14 00:09:37.079474 containerd[1479]: time="2026-03-14T00:09:37.079249201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2zmn,Uid:c89426f4-ffda-4609-ae95-2cd68ecb610b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3d5bf27d61e574858d3074c7b724afc9646d5b64e848f3f432962190f5e5341\"" Mar 14 00:09:37.085986 containerd[1479]: time="2026-03-14T00:09:37.085948448Z" level=info msg="CreateContainer within sandbox \"a3d5bf27d61e574858d3074c7b724afc9646d5b64e848f3f432962190f5e5341\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:09:37.101229 containerd[1479]: time="2026-03-14T00:09:37.101154535Z" level=info msg="CreateContainer within sandbox \"a3d5bf27d61e574858d3074c7b724afc9646d5b64e848f3f432962190f5e5341\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54c9105d1f7a9952327dc08226fadafa9eb5b35bf9224dbec25c043d4ea7b454\"" Mar 14 00:09:37.103773 containerd[1479]: time="2026-03-14T00:09:37.102358585Z" level=info msg="StartContainer for \"54c9105d1f7a9952327dc08226fadafa9eb5b35bf9224dbec25c043d4ea7b454\"" 
Mar 14 00:09:37.133751 systemd[1]: Started cri-containerd-54c9105d1f7a9952327dc08226fadafa9eb5b35bf9224dbec25c043d4ea7b454.scope - libcontainer container 54c9105d1f7a9952327dc08226fadafa9eb5b35bf9224dbec25c043d4ea7b454. Mar 14 00:09:37.171981 containerd[1479]: time="2026-03-14T00:09:37.171809037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-5xcpm,Uid:055ae80c-f5bf-463b-9abc-eb7e4186a276,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:09:37.172796 containerd[1479]: time="2026-03-14T00:09:37.172598455Z" level=info msg="StartContainer for \"54c9105d1f7a9952327dc08226fadafa9eb5b35bf9224dbec25c043d4ea7b454\" returns successfully" Mar 14 00:09:37.212296 containerd[1479]: time="2026-03-14T00:09:37.212204591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:09:37.213514 containerd[1479]: time="2026-03-14T00:09:37.213391637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:09:37.213514 containerd[1479]: time="2026-03-14T00:09:37.213474932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:37.213837 containerd[1479]: time="2026-03-14T00:09:37.213789827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:37.244801 systemd[1]: Started cri-containerd-8f99dbff75c3fa3a1dbead003ea093d4333cda58fff9e767e296e486004ecde0.scope - libcontainer container 8f99dbff75c3fa3a1dbead003ea093d4333cda58fff9e767e296e486004ecde0. 
Mar 14 00:09:37.286526 containerd[1479]: time="2026-03-14T00:09:37.286458199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-5xcpm,Uid:055ae80c-f5bf-463b-9abc-eb7e4186a276,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8f99dbff75c3fa3a1dbead003ea093d4333cda58fff9e767e296e486004ecde0\"" Mar 14 00:09:37.289864 containerd[1479]: time="2026-03-14T00:09:37.289821905Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:09:37.804397 systemd[1]: run-containerd-runc-k8s.io-a3d5bf27d61e574858d3074c7b724afc9646d5b64e848f3f432962190f5e5341-runc.hDgHGM.mount: Deactivated successfully. Mar 14 00:09:38.832530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318046340.mount: Deactivated successfully. Mar 14 00:09:39.263682 containerd[1479]: time="2026-03-14T00:09:39.262746767Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:39.263682 containerd[1479]: time="2026-03-14T00:09:39.263597990Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 14 00:09:39.265051 containerd[1479]: time="2026-03-14T00:09:39.264993744Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:39.268384 containerd[1479]: time="2026-03-14T00:09:39.268167557Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:39.269585 containerd[1479]: time="2026-03-14T00:09:39.269152963Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo 
digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 1.979290571s" Mar 14 00:09:39.269585 containerd[1479]: time="2026-03-14T00:09:39.269192729Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 14 00:09:39.275344 containerd[1479]: time="2026-03-14T00:09:39.275302476Z" level=info msg="CreateContainer within sandbox \"8f99dbff75c3fa3a1dbead003ea093d4333cda58fff9e767e296e486004ecde0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:09:39.291174 containerd[1479]: time="2026-03-14T00:09:39.291026957Z" level=info msg="CreateContainer within sandbox \"8f99dbff75c3fa3a1dbead003ea093d4333cda58fff9e767e296e486004ecde0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08\"" Mar 14 00:09:39.293091 containerd[1479]: time="2026-03-14T00:09:39.293057618Z" level=info msg="StartContainer for \"07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08\"" Mar 14 00:09:39.318737 systemd[1]: Started cri-containerd-07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08.scope - libcontainer container 07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08. 
Mar 14 00:09:39.344712 containerd[1479]: time="2026-03-14T00:09:39.343993414Z" level=info msg="StartContainer for \"07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08\" returns successfully" Mar 14 00:09:39.567155 kubelet[2609]: I0314 00:09:39.566604 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-w2zmn" podStartSLOduration=3.566586404 podStartE2EDuration="3.566586404s" podCreationTimestamp="2026-03-14 00:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:09:38.136841956 +0000 UTC m=+8.206711807" watchObservedRunningTime="2026-03-14 00:09:39.566586404 +0000 UTC m=+9.636456255" Mar 14 00:09:42.570225 kubelet[2609]: I0314 00:09:42.570120 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-5xcpm" podStartSLOduration=4.588137603 podStartE2EDuration="6.570104633s" podCreationTimestamp="2026-03-14 00:09:36 +0000 UTC" firstStartedPulling="2026-03-14 00:09:37.288153455 +0000 UTC m=+7.358023306" lastFinishedPulling="2026-03-14 00:09:39.270120485 +0000 UTC m=+9.339990336" observedRunningTime="2026-03-14 00:09:40.149361401 +0000 UTC m=+10.219231252" watchObservedRunningTime="2026-03-14 00:09:42.570104633 +0000 UTC m=+12.639974484" Mar 14 00:09:45.628694 sudo[1744]: pam_unix(sudo:session): session closed for user root Mar 14 00:09:45.725551 sshd[1741]: pam_unix(sshd:session): session closed for user core Mar 14 00:09:45.729111 systemd[1]: sshd@6-46.224.38.228:22-68.220.241.50:34198.service: Deactivated successfully. Mar 14 00:09:45.731190 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:09:45.731409 systemd[1]: session-7.scope: Consumed 5.127s CPU time, 154.1M memory peak, 0B memory swap peak. Mar 14 00:09:45.733034 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. 
Mar 14 00:09:45.735400 systemd-logind[1453]: Removed session 7. Mar 14 00:09:53.416563 systemd[1]: Created slice kubepods-besteffort-podbeb867c5_021a_47d7_a9e3_50f70b8374ed.slice - libcontainer container kubepods-besteffort-podbeb867c5_021a_47d7_a9e3_50f70b8374ed.slice. Mar 14 00:09:53.492371 kubelet[2609]: I0314 00:09:53.492308 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/beb867c5-021a-47d7-a9e3-50f70b8374ed-typha-certs\") pod \"calico-typha-cb7db68f7-krhtm\" (UID: \"beb867c5-021a-47d7-a9e3-50f70b8374ed\") " pod="calico-system/calico-typha-cb7db68f7-krhtm" Mar 14 00:09:53.492371 kubelet[2609]: I0314 00:09:53.492351 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/beb867c5-021a-47d7-a9e3-50f70b8374ed-tigera-ca-bundle\") pod \"calico-typha-cb7db68f7-krhtm\" (UID: \"beb867c5-021a-47d7-a9e3-50f70b8374ed\") " pod="calico-system/calico-typha-cb7db68f7-krhtm" Mar 14 00:09:53.492371 kubelet[2609]: I0314 00:09:53.492373 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxwkh\" (UniqueName: \"kubernetes.io/projected/beb867c5-021a-47d7-a9e3-50f70b8374ed-kube-api-access-zxwkh\") pod \"calico-typha-cb7db68f7-krhtm\" (UID: \"beb867c5-021a-47d7-a9e3-50f70b8374ed\") " pod="calico-system/calico-typha-cb7db68f7-krhtm" Mar 14 00:09:53.553966 systemd[1]: Created slice kubepods-besteffort-podeac54517_7626_4a42_bed8_c3d7f7e52f69.slice - libcontainer container kubepods-besteffort-podeac54517_7626_4a42_bed8_c3d7f7e52f69.slice. 
Mar 14 00:09:53.594150 kubelet[2609]: I0314 00:09:53.593371 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-policysync\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594150 kubelet[2609]: I0314 00:09:53.593586 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-var-run-calico\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594150 kubelet[2609]: I0314 00:09:53.593668 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-lib-modules\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594150 kubelet[2609]: I0314 00:09:53.593717 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfwr\" (UniqueName: \"kubernetes.io/projected/eac54517-7626-4a42-bed8-c3d7f7e52f69-kube-api-access-wtfwr\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594150 kubelet[2609]: I0314 00:09:53.593760 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-bpffs\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594723 kubelet[2609]: I0314 00:09:53.593806 2609 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-cni-bin-dir\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594723 kubelet[2609]: I0314 00:09:53.593921 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-var-lib-calico\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594723 kubelet[2609]: I0314 00:09:53.593991 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eac54517-7626-4a42-bed8-c3d7f7e52f69-tigera-ca-bundle\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.594723 kubelet[2609]: I0314 00:09:53.594029 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-nodeproc\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.597423 kubelet[2609]: I0314 00:09:53.595460 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-cni-net-dir\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.597423 kubelet[2609]: I0314 00:09:53.595531 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-xtables-lock\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.597423 kubelet[2609]: I0314 00:09:53.596633 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-cni-log-dir\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.597423 kubelet[2609]: I0314 00:09:53.596701 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-flexvol-driver-host\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.597423 kubelet[2609]: I0314 00:09:53.596741 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eac54517-7626-4a42-bed8-c3d7f7e52f69-node-certs\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.597783 kubelet[2609]: I0314 00:09:53.596780 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/eac54517-7626-4a42-bed8-c3d7f7e52f69-sys-fs\") pod \"calico-node-62zmq\" (UID: \"eac54517-7626-4a42-bed8-c3d7f7e52f69\") " pod="calico-system/calico-node-62zmq" Mar 14 00:09:53.663170 kubelet[2609]: E0314 00:09:53.663118 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:09:53.697452 kubelet[2609]: I0314 00:09:53.697233 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg829\" (UniqueName: \"kubernetes.io/projected/668745bf-4f34-407b-807f-7c7e77d971d9-kube-api-access-hg829\") pod \"csi-node-driver-qb2mw\" (UID: \"668745bf-4f34-407b-807f-7c7e77d971d9\") " pod="calico-system/csi-node-driver-qb2mw" Mar 14 00:09:53.697452 kubelet[2609]: I0314 00:09:53.697323 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/668745bf-4f34-407b-807f-7c7e77d971d9-kubelet-dir\") pod \"csi-node-driver-qb2mw\" (UID: \"668745bf-4f34-407b-807f-7c7e77d971d9\") " pod="calico-system/csi-node-driver-qb2mw" Mar 14 00:09:53.697452 kubelet[2609]: I0314 00:09:53.697368 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/668745bf-4f34-407b-807f-7c7e77d971d9-socket-dir\") pod \"csi-node-driver-qb2mw\" (UID: \"668745bf-4f34-407b-807f-7c7e77d971d9\") " pod="calico-system/csi-node-driver-qb2mw" Mar 14 00:09:53.697452 kubelet[2609]: I0314 00:09:53.697407 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/668745bf-4f34-407b-807f-7c7e77d971d9-varrun\") pod \"csi-node-driver-qb2mw\" (UID: \"668745bf-4f34-407b-807f-7c7e77d971d9\") " pod="calico-system/csi-node-driver-qb2mw" Mar 14 00:09:53.698094 kubelet[2609]: I0314 00:09:53.697487 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/668745bf-4f34-407b-807f-7c7e77d971d9-registration-dir\") pod 
\"csi-node-driver-qb2mw\" (UID: \"668745bf-4f34-407b-807f-7c7e77d971d9\") " pod="calico-system/csi-node-driver-qb2mw" Mar 14 00:09:53.703786 kubelet[2609]: E0314 00:09:53.703749 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.703786 kubelet[2609]: W0314 00:09:53.703776 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.703946 kubelet[2609]: E0314 00:09:53.703805 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.707974 kubelet[2609]: E0314 00:09:53.707924 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.707974 kubelet[2609]: W0314 00:09:53.707951 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.707974 kubelet[2609]: E0314 00:09:53.707972 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.728027 kubelet[2609]: E0314 00:09:53.725648 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.728027 kubelet[2609]: W0314 00:09:53.725673 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.728027 kubelet[2609]: E0314 00:09:53.725694 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.728841 containerd[1479]: time="2026-03-14T00:09:53.728612500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb7db68f7-krhtm,Uid:beb867c5-021a-47d7-a9e3-50f70b8374ed,Namespace:calico-system,Attempt:0,}" Mar 14 00:09:53.761389 containerd[1479]: time="2026-03-14T00:09:53.761240396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:09:53.761514 containerd[1479]: time="2026-03-14T00:09:53.761455787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:09:53.761743 containerd[1479]: time="2026-03-14T00:09:53.761501593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:53.763025 containerd[1479]: time="2026-03-14T00:09:53.762830541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:53.789235 systemd[1]: Started cri-containerd-249030aca0355693ae853bc6b76cadb0dec021dda04cc45603d9aad2b728f36b.scope - libcontainer container 249030aca0355693ae853bc6b76cadb0dec021dda04cc45603d9aad2b728f36b. Mar 14 00:09:53.800054 kubelet[2609]: E0314 00:09:53.799835 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.800054 kubelet[2609]: W0314 00:09:53.799857 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.800054 kubelet[2609]: E0314 00:09:53.799916 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.800297 kubelet[2609]: E0314 00:09:53.800278 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.800360 kubelet[2609]: W0314 00:09:53.800349 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.800413 kubelet[2609]: E0314 00:09:53.800402 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.800989 kubelet[2609]: E0314 00:09:53.800944 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.800989 kubelet[2609]: W0314 00:09:53.800979 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.800989 kubelet[2609]: E0314 00:09:53.800996 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.801404 kubelet[2609]: E0314 00:09:53.801385 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.801404 kubelet[2609]: W0314 00:09:53.801399 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.801500 kubelet[2609]: E0314 00:09:53.801411 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.801905 kubelet[2609]: E0314 00:09:53.801874 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.801905 kubelet[2609]: W0314 00:09:53.801890 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.801905 kubelet[2609]: E0314 00:09:53.801904 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.802745 kubelet[2609]: E0314 00:09:53.802721 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.802951 kubelet[2609]: W0314 00:09:53.802832 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.802951 kubelet[2609]: E0314 00:09:53.802850 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.803228 kubelet[2609]: E0314 00:09:53.803211 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.803386 kubelet[2609]: W0314 00:09:53.803293 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.803386 kubelet[2609]: E0314 00:09:53.803322 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.803823 kubelet[2609]: E0314 00:09:53.803707 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.803823 kubelet[2609]: W0314 00:09:53.803719 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.803823 kubelet[2609]: E0314 00:09:53.803748 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.804163 kubelet[2609]: E0314 00:09:53.804141 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.804293 kubelet[2609]: W0314 00:09:53.804217 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.804293 kubelet[2609]: E0314 00:09:53.804239 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.804468 kubelet[2609]: E0314 00:09:53.804448 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.804468 kubelet[2609]: W0314 00:09:53.804467 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.804612 kubelet[2609]: E0314 00:09:53.804490 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.805635 kubelet[2609]: E0314 00:09:53.805463 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.805635 kubelet[2609]: W0314 00:09:53.805482 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.805635 kubelet[2609]: E0314 00:09:53.805493 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.805964 kubelet[2609]: E0314 00:09:53.805951 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.806093 kubelet[2609]: W0314 00:09:53.806031 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.806093 kubelet[2609]: E0314 00:09:53.806047 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.807778 kubelet[2609]: E0314 00:09:53.807668 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.807778 kubelet[2609]: W0314 00:09:53.807685 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.807778 kubelet[2609]: E0314 00:09:53.807697 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.808365 kubelet[2609]: E0314 00:09:53.808240 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.808365 kubelet[2609]: W0314 00:09:53.808254 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.808365 kubelet[2609]: E0314 00:09:53.808274 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.808670 kubelet[2609]: E0314 00:09:53.808590 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.808670 kubelet[2609]: W0314 00:09:53.808601 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.808670 kubelet[2609]: E0314 00:09:53.808615 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.809108 kubelet[2609]: E0314 00:09:53.809012 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.809108 kubelet[2609]: W0314 00:09:53.809024 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.809108 kubelet[2609]: E0314 00:09:53.809036 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.809422 kubelet[2609]: E0314 00:09:53.809345 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.809422 kubelet[2609]: W0314 00:09:53.809365 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.809422 kubelet[2609]: E0314 00:09:53.809377 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.809754 kubelet[2609]: E0314 00:09:53.809726 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.809889 kubelet[2609]: W0314 00:09:53.809828 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.809889 kubelet[2609]: E0314 00:09:53.809844 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.810339 kubelet[2609]: E0314 00:09:53.810326 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.810565 kubelet[2609]: W0314 00:09:53.810399 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.810565 kubelet[2609]: E0314 00:09:53.810416 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.810717 kubelet[2609]: E0314 00:09:53.810707 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.810771 kubelet[2609]: W0314 00:09:53.810762 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.810905 kubelet[2609]: E0314 00:09:53.810812 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.811272 kubelet[2609]: E0314 00:09:53.811260 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.811432 kubelet[2609]: W0314 00:09:53.811340 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.811432 kubelet[2609]: E0314 00:09:53.811355 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.811695 kubelet[2609]: E0314 00:09:53.811633 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.811695 kubelet[2609]: W0314 00:09:53.811644 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.811695 kubelet[2609]: E0314 00:09:53.811654 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.812331 kubelet[2609]: E0314 00:09:53.812135 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.812331 kubelet[2609]: W0314 00:09:53.812155 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.812331 kubelet[2609]: E0314 00:09:53.812169 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.813141 kubelet[2609]: E0314 00:09:53.813040 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.813141 kubelet[2609]: W0314 00:09:53.813054 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.813141 kubelet[2609]: E0314 00:09:53.813065 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.813456 kubelet[2609]: E0314 00:09:53.813409 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.813456 kubelet[2609]: W0314 00:09:53.813422 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.813456 kubelet[2609]: E0314 00:09:53.813433 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:53.829033 kubelet[2609]: E0314 00:09:53.828952 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:53.829213 kubelet[2609]: W0314 00:09:53.829095 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:53.829213 kubelet[2609]: E0314 00:09:53.829117 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:53.840777 containerd[1479]: time="2026-03-14T00:09:53.840742963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cb7db68f7-krhtm,Uid:beb867c5-021a-47d7-a9e3-50f70b8374ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"249030aca0355693ae853bc6b76cadb0dec021dda04cc45603d9aad2b728f36b\"" Mar 14 00:09:53.843603 containerd[1479]: time="2026-03-14T00:09:53.843229355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:09:53.860285 containerd[1479]: time="2026-03-14T00:09:53.860189634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-62zmq,Uid:eac54517-7626-4a42-bed8-c3d7f7e52f69,Namespace:calico-system,Attempt:0,}" Mar 14 00:09:53.887608 containerd[1479]: time="2026-03-14T00:09:53.887458892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:09:53.887608 containerd[1479]: time="2026-03-14T00:09:53.887509939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:09:53.887608 containerd[1479]: time="2026-03-14T00:09:53.887520741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:53.889063 containerd[1479]: time="2026-03-14T00:09:53.887946121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:09:53.905720 systemd[1]: Started cri-containerd-696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9.scope - libcontainer container 696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9. 
Mar 14 00:09:53.932904 containerd[1479]: time="2026-03-14T00:09:53.932855834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-62zmq,Uid:eac54517-7626-4a42-bed8-c3d7f7e52f69,Namespace:calico-system,Attempt:0,} returns sandbox id \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\"" Mar 14 00:09:55.046429 kubelet[2609]: E0314 00:09:55.046011 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:09:55.203123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047310049.mount: Deactivated successfully. Mar 14 00:09:55.636853 containerd[1479]: time="2026-03-14T00:09:55.636735286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:55.638810 containerd[1479]: time="2026-03-14T00:09:55.638651420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 14 00:09:55.639953 containerd[1479]: time="2026-03-14T00:09:55.639818874Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:55.642192 containerd[1479]: time="2026-03-14T00:09:55.642129904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:55.643167 containerd[1479]: time="2026-03-14T00:09:55.642843830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.799577793s" Mar 14 00:09:55.643167 containerd[1479]: time="2026-03-14T00:09:55.642881104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 14 00:09:55.646039 containerd[1479]: time="2026-03-14T00:09:55.645957493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:09:55.661083 containerd[1479]: time="2026-03-14T00:09:55.660856473Z" level=info msg="CreateContainer within sandbox \"249030aca0355693ae853bc6b76cadb0dec021dda04cc45603d9aad2b728f36b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:09:55.685530 containerd[1479]: time="2026-03-14T00:09:55.685179427Z" level=info msg="CreateContainer within sandbox \"249030aca0355693ae853bc6b76cadb0dec021dda04cc45603d9aad2b728f36b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aaaed888db27bab62284a90565a4a397a84976be18150f872347a92f5b4bc938\"" Mar 14 00:09:55.687528 containerd[1479]: time="2026-03-14T00:09:55.687481499Z" level=info msg="StartContainer for \"aaaed888db27bab62284a90565a4a397a84976be18150f872347a92f5b4bc938\"" Mar 14 00:09:55.717755 systemd[1]: Started cri-containerd-aaaed888db27bab62284a90565a4a397a84976be18150f872347a92f5b4bc938.scope - libcontainer container aaaed888db27bab62284a90565a4a397a84976be18150f872347a92f5b4bc938. 
Mar 14 00:09:55.758890 containerd[1479]: time="2026-03-14T00:09:55.758843939Z" level=info msg="StartContainer for \"aaaed888db27bab62284a90565a4a397a84976be18150f872347a92f5b4bc938\" returns successfully" Mar 14 00:09:56.192505 kubelet[2609]: E0314 00:09:56.192311 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.192505 kubelet[2609]: W0314 00:09:56.192336 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.192505 kubelet[2609]: E0314 00:09:56.192394 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:56.193330 kubelet[2609]: E0314 00:09:56.193071 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.193330 kubelet[2609]: W0314 00:09:56.193083 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.193330 kubelet[2609]: E0314 00:09:56.193096 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:56.194112 kubelet[2609]: E0314 00:09:56.193868 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.194112 kubelet[2609]: W0314 00:09:56.193882 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.194112 kubelet[2609]: E0314 00:09:56.193894 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:56.196586 kubelet[2609]: E0314 00:09:56.196387 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.196586 kubelet[2609]: W0314 00:09:56.196416 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.196586 kubelet[2609]: E0314 00:09:56.196436 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:56.197185 kubelet[2609]: E0314 00:09:56.197162 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.197185 kubelet[2609]: W0314 00:09:56.197177 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.197274 kubelet[2609]: E0314 00:09:56.197189 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:56.197357 kubelet[2609]: E0314 00:09:56.197345 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.197357 kubelet[2609]: W0314 00:09:56.197355 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.197444 kubelet[2609]: E0314 00:09:56.197364 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:56.197561 kubelet[2609]: E0314 00:09:56.197494 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.197561 kubelet[2609]: W0314 00:09:56.197504 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.197561 kubelet[2609]: E0314 00:09:56.197512 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:56.197937 kubelet[2609]: E0314 00:09:56.197922 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.197937 kubelet[2609]: W0314 00:09:56.197937 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.198132 kubelet[2609]: E0314 00:09:56.197956 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:09:56.232625 kubelet[2609]: E0314 00:09:56.232611 2609 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:09:56.232744 kubelet[2609]: W0314 00:09:56.232678 2609 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:09:56.232744 kubelet[2609]: E0314 00:09:56.232692 2609 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:09:56.995897 containerd[1479]: time="2026-03-14T00:09:56.995743457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:56.997398 containerd[1479]: time="2026-03-14T00:09:56.997350973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 14 00:09:57.000519 containerd[1479]: time="2026-03-14T00:09:56.999256003Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:57.002205 containerd[1479]: time="2026-03-14T00:09:57.001788028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:09:57.002781 containerd[1479]: time="2026-03-14T00:09:57.002743370Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.356009282s" Mar 14 00:09:57.002868 containerd[1479]: time="2026-03-14T00:09:57.002780725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 14 00:09:57.009892 containerd[1479]: time="2026-03-14T00:09:57.009849705Z" level=info msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:09:57.028789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305154498.mount: Deactivated successfully. Mar 14 00:09:57.029907 containerd[1479]: time="2026-03-14T00:09:57.029843581Z" level=info msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6\"" Mar 14 00:09:57.031960 containerd[1479]: time="2026-03-14T00:09:57.031930640Z" level=info msg="StartContainer for \"9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6\"" Mar 14 00:09:57.046096 kubelet[2609]: E0314 00:09:57.045231 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:09:57.072755 systemd[1]: Started cri-containerd-9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6.scope - libcontainer container 9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6. 
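The flood of FlexVolume errors above has a single root cause: the nodeagent~uds driver binary is absent from /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, so each driver call produces no stdout, and the kubelet's JSON decode of that empty output fails. A FlexVolume driver is expected to answer `init` with a JSON status object on stdout; the sketch below mimics that decode step (the exact response shape shown is the FlexVolume convention as I understand it, not taken from this log):

```python
import json

def parse_driver_output(stdout: str) -> dict:
    """Mimic what the kubelet's driver-call.go does with a driver's stdout."""
    # Decoding empty output is exactly the "unexpected end of JSON input"
    # class of failure logged above (that wording is Go's encoding/json).
    return json.loads(stdout)

# A present, well-behaved driver answers `init` with a status object like:
ok = parse_driver_output('{"status": "Success", "capabilities": {"attach": false}}')
assert ok["status"] == "Success"

# The missing uds binary produced no output at all, so the decode fails:
try:
    parse_driver_output("")
except json.JSONDecodeError:
    pass  # kubelet surfaces this and skips the plugin directory
```

Because plugin probing retries continuously, the same three-line failure (unmarshal error, driver-call warning, probe error) repeats until the driver appears or the directory is removed.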
Mar 14 00:09:57.102959 containerd[1479]: time="2026-03-14T00:09:57.102859648Z" level=info msg="StartContainer for \"9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6\" returns successfully" Mar 14 00:09:57.120349 systemd[1]: cri-containerd-9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6.scope: Deactivated successfully. Mar 14 00:09:57.183108 kubelet[2609]: I0314 00:09:57.183062 2609 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:09:57.212995 kubelet[2609]: I0314 00:09:57.209514 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-cb7db68f7-krhtm" podStartSLOduration=2.407691668 podStartE2EDuration="4.209499424s" podCreationTimestamp="2026-03-14 00:09:53 +0000 UTC" firstStartedPulling="2026-03-14 00:09:53.842862463 +0000 UTC m=+23.912732314" lastFinishedPulling="2026-03-14 00:09:55.644670219 +0000 UTC m=+25.714540070" observedRunningTime="2026-03-14 00:09:56.195446486 +0000 UTC m=+26.265316337" watchObservedRunningTime="2026-03-14 00:09:57.209499424 +0000 UTC m=+27.279369275" Mar 14 00:09:57.256732 containerd[1479]: time="2026-03-14T00:09:57.255984438Z" level=info msg="shim disconnected" id=9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6 namespace=k8s.io Mar 14 00:09:57.256946 containerd[1479]: time="2026-03-14T00:09:57.256915424Z" level=warning msg="cleaning up after shim disconnected" id=9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6 namespace=k8s.io Mar 14 00:09:57.257014 containerd[1479]: time="2026-03-14T00:09:57.256998972Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:09:57.653961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e2d2287e49c7a668883a0bf008ca05f030d5125d7054e6593e25c8d9e5e52b6-rootfs.mount: Deactivated successfully. 
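The pod_startup_latency_tracker entry above is internally consistent and worth decoding: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of the arithmetic using the timestamps from the entry itself (Decimal avoids float rounding on the nanosecond fractions):

```python
from decimal import Decimal

def ts_seconds(hms: str) -> Decimal:
    """Seconds-of-day for an HH:MM:SS[.fffffffff] timestamp from the log."""
    h, m, s = hms.split(":")
    return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

# Timestamps copied from the calico-typha pod_startup_latency_tracker entry:
pull = ts_seconds("00:09:55.644670219") - ts_seconds("00:09:53.842862463")
e2e = ts_seconds("00:09:57.209499424") - ts_seconds("00:09:53")
slo = e2e - pull

assert str(pull) == "1.801807756"  # image-pull window
assert str(e2e) == "4.209499424"   # matches podStartE2EDuration
assert str(slo) == "2.407691668"   # matches podStartSLOduration
```

So of the 4.2 s the pod took to come up, 1.8 s was image pulling, which the SLO figure excludes.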
Mar 14 00:09:58.193707 containerd[1479]: time="2026-03-14T00:09:58.193656179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:09:59.045731 kubelet[2609]: E0314 00:09:59.045652 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:10:01.045576 kubelet[2609]: E0314 00:10:01.045230 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:10:02.159324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190264505.mount: Deactivated successfully. 
Mar 14 00:10:02.184178 containerd[1479]: time="2026-03-14T00:10:02.183378109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:02.184657 containerd[1479]: time="2026-03-14T00:10:02.184626213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 14 00:10:02.185720 containerd[1479]: time="2026-03-14T00:10:02.185691016Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:02.189701 containerd[1479]: time="2026-03-14T00:10:02.189632064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:02.190312 containerd[1479]: time="2026-03-14T00:10:02.190281113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 3.994549378s" Mar 14 00:10:02.190407 containerd[1479]: time="2026-03-14T00:10:02.190391741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 14 00:10:02.195517 containerd[1479]: time="2026-03-14T00:10:02.195471384Z" level=info msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:10:02.221787 containerd[1479]: time="2026-03-14T00:10:02.221734226Z" level=info 
msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78\"" Mar 14 00:10:02.223069 containerd[1479]: time="2026-03-14T00:10:02.222999488Z" level=info msg="StartContainer for \"04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78\"" Mar 14 00:10:02.253811 systemd[1]: run-containerd-runc-k8s.io-04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78-runc.2fHOhP.mount: Deactivated successfully. Mar 14 00:10:02.264081 systemd[1]: Started cri-containerd-04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78.scope - libcontainer container 04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78. Mar 14 00:10:02.303122 containerd[1479]: time="2026-03-14T00:10:02.303026359Z" level=info msg="StartContainer for \"04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78\" returns successfully" Mar 14 00:10:02.410516 systemd[1]: cri-containerd-04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78.scope: Deactivated successfully. 
Mar 14 00:10:02.577220 containerd[1479]: time="2026-03-14T00:10:02.577131882Z" level=info msg="shim disconnected" id=04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78 namespace=k8s.io Mar 14 00:10:02.577220 containerd[1479]: time="2026-03-14T00:10:02.577205834Z" level=warning msg="cleaning up after shim disconnected" id=04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78 namespace=k8s.io Mar 14 00:10:02.577220 containerd[1479]: time="2026-03-14T00:10:02.577218753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:10:02.589579 containerd[1479]: time="2026-03-14T00:10:02.589039537Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:10:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:10:03.045408 kubelet[2609]: E0314 00:10:03.045324 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:10:03.162505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04608c9cc4618db9772a11a82b372d54efb1a67a3cc0830fad7527c3d047ff78-rootfs.mount: Deactivated successfully. 
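The containerd lines interleaved through this log are close to logfmt: space-separated key=value pairs in which values are either bare tokens or double-quoted strings containing `\"` escapes (visible in the msg= fields above). When grepping a journal like this for pull durations or shim cleanup warnings, a small parser along these lines can be handier than eyeballing; this is a sketch for this log's shape, not containerd's own format definition:

```python
import re

# key=value pairs; quoted values may contain backslash-escaped quotes.
PAIR = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_entry(entry: str) -> dict:
    """Split one containerd log entry into a field dict."""
    fields = {}
    for key, raw in PAIR.findall(entry):
        if raw.startswith('"'):
            raw = raw[1:-1].replace('\\"', '"')  # unquote and unescape
        fields[key] = raw
    return fields

# Illustrative entry modeled on the PullImage lines above (digest shortened):
entry = (r'time="2026-03-14T00:10:02.190391741Z" level=info '
         r'msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" '
         r'returns image reference \"sha256:27be54f2b9e47d96...\""')
f = parse_entry(entry)
assert f["level"] == "info"
assert "calico/node:v3.31.4" in f["msg"]
```

The same pattern extracts the "in 3.994549378s" pull timing and the "exit status 255" runc cleanup warning without hand-splitting the quoted fields.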
Mar 14 00:10:03.207241 containerd[1479]: time="2026-03-14T00:10:03.206350389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:10:05.045820 kubelet[2609]: E0314 00:10:05.045710 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:10:07.045827 kubelet[2609]: E0314 00:10:07.045709 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:10:07.269728 kubelet[2609]: I0314 00:10:07.268111 2609 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:10:07.488605 containerd[1479]: time="2026-03-14T00:10:07.488517230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:07.490891 containerd[1479]: time="2026-03-14T00:10:07.490835484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 14 00:10:07.491465 containerd[1479]: time="2026-03-14T00:10:07.491418318Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:07.495598 containerd[1479]: time="2026-03-14T00:10:07.495554387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 
00:10:07.497035 containerd[1479]: time="2026-03-14T00:10:07.496983713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 4.290584168s" Mar 14 00:10:07.497035 containerd[1479]: time="2026-03-14T00:10:07.497032669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 14 00:10:07.503830 containerd[1479]: time="2026-03-14T00:10:07.503777489Z" level=info msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:10:07.518946 containerd[1479]: time="2026-03-14T00:10:07.518871802Z" level=info msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91\"" Mar 14 00:10:07.520725 containerd[1479]: time="2026-03-14T00:10:07.520680817Z" level=info msg="StartContainer for \"361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91\"" Mar 14 00:10:07.557905 systemd[1]: Started cri-containerd-361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91.scope - libcontainer container 361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91. 
Mar 14 00:10:07.588317 containerd[1479]: time="2026-03-14T00:10:07.588192297Z" level=info msg="StartContainer for \"361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91\" returns successfully" Mar 14 00:10:08.117910 kubelet[2609]: I0314 00:10:08.117869 2609 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 14 00:10:08.122739 systemd[1]: cri-containerd-361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91.scope: Deactivated successfully. Mar 14 00:10:08.153329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91-rootfs.mount: Deactivated successfully. Mar 14 00:10:08.243903 systemd[1]: Created slice kubepods-besteffort-pod01b2b6b9_d251_4025_a4b5_f21cd42a4542.slice - libcontainer container kubepods-besteffort-pod01b2b6b9_d251_4025_a4b5_f21cd42a4542.slice. Mar 14 00:10:08.248585 containerd[1479]: time="2026-03-14T00:10:08.248231265Z" level=info msg="shim disconnected" id=361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91 namespace=k8s.io Mar 14 00:10:08.248585 containerd[1479]: time="2026-03-14T00:10:08.248286061Z" level=warning msg="cleaning up after shim disconnected" id=361e5a0b33bf969911a02cfa78e1fc80fdf46e22b04b710bf1ca40075cf79e91 namespace=k8s.io Mar 14 00:10:08.248585 containerd[1479]: time="2026-03-14T00:10:08.248296940Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:10:08.263450 systemd[1]: Created slice kubepods-burstable-pod750010d3_d177_45c1_adc7_1d676ed1917e.slice - libcontainer container kubepods-burstable-pod750010d3_d177_45c1_adc7_1d676ed1917e.slice. Mar 14 00:10:08.275641 systemd[1]: Created slice kubepods-besteffort-pod2e989e9d_2731_40ad_ae30_02ef1e6a05b7.slice - libcontainer container kubepods-besteffort-pod2e989e9d_2731_40ad_ae30_02ef1e6a05b7.slice. 
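The kubepods-*.slice units systemd creates here are derived mechanically from each pod's QoS class and UID: since systemd uses '-' as a slice hierarchy separator, the kubelet's systemd cgroup driver maps the dashes in the pod UID to underscores to keep the unit name valid. A sketch reproducing the slice names seen in the entries above:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # '-' separates slice hierarchy levels in systemd, so the UID's
    # dashes become underscores in the unit name.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# Both match slices created in the log above:
assert pod_slice_name("besteffort", "01b2b6b9-d251-4025-a4b5-f21cd42a4542") == \
    "kubepods-besteffort-pod01b2b6b9_d251_4025_a4b5_f21cd42a4542.slice"
assert pod_slice_name("burstable", "750010d3-d177-45c1-adc7-1d676ed1917e") == \
    "kubepods-burstable-pod750010d3_d177_45c1_adc7_1d676ed1917e.slice"
```

Reversing the mapping is a quick way to go from a slice name in systemd output back to a pod UID for `kubectl get pod --field-selector` style lookups.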
Mar 14 00:10:08.281263 containerd[1479]: time="2026-03-14T00:10:08.281211564Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:10:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:10:08.283822 systemd[1]: Created slice kubepods-besteffort-podb57fb4ff_b196_4410_bd4a_7db8a81fb987.slice - libcontainer container kubepods-besteffort-podb57fb4ff_b196_4410_bd4a_7db8a81fb987.slice. Mar 14 00:10:08.293998 systemd[1]: Created slice kubepods-besteffort-pod34f9a0e8_7ea2_403e_b90c_0c3c742508fa.slice - libcontainer container kubepods-besteffort-pod34f9a0e8_7ea2_403e_b90c_0c3c742508fa.slice. Mar 14 00:10:08.304667 systemd[1]: Created slice kubepods-besteffort-pod5008851d_8ee3_475b_854f_0622bf490d26.slice - libcontainer container kubepods-besteffort-pod5008851d_8ee3_475b_854f_0622bf490d26.slice. Mar 14 00:10:08.311437 systemd[1]: Created slice kubepods-burstable-pod67c2f56e_bd26_464f_9efc_470b8df89e0c.slice - libcontainer container kubepods-burstable-pod67c2f56e_bd26_464f_9efc_470b8df89e0c.slice. 
Mar 14 00:10:08.319095 kubelet[2609]: I0314 00:10:08.319046 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ztf\" (UniqueName: \"kubernetes.io/projected/750010d3-d177-45c1-adc7-1d676ed1917e-kube-api-access-m8ztf\") pod \"coredns-7d764666f9-lqkzl\" (UID: \"750010d3-d177-45c1-adc7-1d676ed1917e\") " pod="kube-system/coredns-7d764666f9-lqkzl" Mar 14 00:10:08.319523 kubelet[2609]: I0314 00:10:08.319382 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxllh\" (UniqueName: \"kubernetes.io/projected/2e989e9d-2731-40ad-ae30-02ef1e6a05b7-kube-api-access-cxllh\") pod \"calico-kube-controllers-fff8f6d65-6s5j2\" (UID: \"2e989e9d-2731-40ad-ae30-02ef1e6a05b7\") " pod="calico-system/calico-kube-controllers-fff8f6d65-6s5j2" Mar 14 00:10:08.319721 kubelet[2609]: I0314 00:10:08.319693 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgjvr\" (UniqueName: \"kubernetes.io/projected/34f9a0e8-7ea2-403e-b90c-0c3c742508fa-kube-api-access-sgjvr\") pod \"goldmane-9f7667bb8-5kqn9\" (UID: \"34f9a0e8-7ea2-403e-b90c-0c3c742508fa\") " pod="calico-system/goldmane-9f7667bb8-5kqn9" Mar 14 00:10:08.320454 kubelet[2609]: I0314 00:10:08.319819 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-whisker-ca-bundle\") pod \"whisker-5b75858f74-6sk86\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " pod="calico-system/whisker-5b75858f74-6sk86" Mar 14 00:10:08.320454 kubelet[2609]: I0314 00:10:08.319895 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx8b2\" (UniqueName: \"kubernetes.io/projected/01b2b6b9-d251-4025-a4b5-f21cd42a4542-kube-api-access-jx8b2\") pod 
\"calico-apiserver-677d8bcf85-q78r4\" (UID: \"01b2b6b9-d251-4025-a4b5-f21cd42a4542\") " pod="calico-system/calico-apiserver-677d8bcf85-q78r4" Mar 14 00:10:08.320454 kubelet[2609]: I0314 00:10:08.319927 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b57fb4ff-b196-4410-bd4a-7db8a81fb987-calico-apiserver-certs\") pod \"calico-apiserver-677d8bcf85-kwfpb\" (UID: \"b57fb4ff-b196-4410-bd4a-7db8a81fb987\") " pod="calico-system/calico-apiserver-677d8bcf85-kwfpb" Mar 14 00:10:08.320454 kubelet[2609]: I0314 00:10:08.319965 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34f9a0e8-7ea2-403e-b90c-0c3c742508fa-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-5kqn9\" (UID: \"34f9a0e8-7ea2-403e-b90c-0c3c742508fa\") " pod="calico-system/goldmane-9f7667bb8-5kqn9" Mar 14 00:10:08.320454 kubelet[2609]: I0314 00:10:08.320313 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm7xt\" (UniqueName: \"kubernetes.io/projected/b57fb4ff-b196-4410-bd4a-7db8a81fb987-kube-api-access-jm7xt\") pod \"calico-apiserver-677d8bcf85-kwfpb\" (UID: \"b57fb4ff-b196-4410-bd4a-7db8a81fb987\") " pod="calico-system/calico-apiserver-677d8bcf85-kwfpb" Mar 14 00:10:08.320805 kubelet[2609]: I0314 00:10:08.320358 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f9a0e8-7ea2-403e-b90c-0c3c742508fa-config\") pod \"goldmane-9f7667bb8-5kqn9\" (UID: \"34f9a0e8-7ea2-403e-b90c-0c3c742508fa\") " pod="calico-system/goldmane-9f7667bb8-5kqn9" Mar 14 00:10:08.320980 kubelet[2609]: I0314 00:10:08.320948 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/01b2b6b9-d251-4025-a4b5-f21cd42a4542-calico-apiserver-certs\") pod \"calico-apiserver-677d8bcf85-q78r4\" (UID: \"01b2b6b9-d251-4025-a4b5-f21cd42a4542\") " pod="calico-system/calico-apiserver-677d8bcf85-q78r4" Mar 14 00:10:08.321137 kubelet[2609]: I0314 00:10:08.321113 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e989e9d-2731-40ad-ae30-02ef1e6a05b7-tigera-ca-bundle\") pod \"calico-kube-controllers-fff8f6d65-6s5j2\" (UID: \"2e989e9d-2731-40ad-ae30-02ef1e6a05b7\") " pod="calico-system/calico-kube-controllers-fff8f6d65-6s5j2" Mar 14 00:10:08.321295 kubelet[2609]: I0314 00:10:08.321268 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/750010d3-d177-45c1-adc7-1d676ed1917e-config-volume\") pod \"coredns-7d764666f9-lqkzl\" (UID: \"750010d3-d177-45c1-adc7-1d676ed1917e\") " pod="kube-system/coredns-7d764666f9-lqkzl" Mar 14 00:10:08.321453 kubelet[2609]: I0314 00:10:08.321427 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/34f9a0e8-7ea2-403e-b90c-0c3c742508fa-goldmane-key-pair\") pod \"goldmane-9f7667bb8-5kqn9\" (UID: \"34f9a0e8-7ea2-403e-b90c-0c3c742508fa\") " pod="calico-system/goldmane-9f7667bb8-5kqn9" Mar 14 00:10:08.321610 kubelet[2609]: I0314 00:10:08.321583 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-nginx-config\") pod \"whisker-5b75858f74-6sk86\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " pod="calico-system/whisker-5b75858f74-6sk86" Mar 14 00:10:08.321758 kubelet[2609]: I0314 00:10:08.321735 2609 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5008851d-8ee3-475b-854f-0622bf490d26-whisker-backend-key-pair\") pod \"whisker-5b75858f74-6sk86\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " pod="calico-system/whisker-5b75858f74-6sk86" Mar 14 00:10:08.321898 kubelet[2609]: I0314 00:10:08.321875 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c2f56e-bd26-464f-9efc-470b8df89e0c-config-volume\") pod \"coredns-7d764666f9-m9tx7\" (UID: \"67c2f56e-bd26-464f-9efc-470b8df89e0c\") " pod="kube-system/coredns-7d764666f9-m9tx7" Mar 14 00:10:08.322041 kubelet[2609]: I0314 00:10:08.322018 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmj7l\" (UniqueName: \"kubernetes.io/projected/67c2f56e-bd26-464f-9efc-470b8df89e0c-kube-api-access-jmj7l\") pod \"coredns-7d764666f9-m9tx7\" (UID: \"67c2f56e-bd26-464f-9efc-470b8df89e0c\") " pod="kube-system/coredns-7d764666f9-m9tx7" Mar 14 00:10:08.322176 kubelet[2609]: I0314 00:10:08.322156 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8brcp\" (UniqueName: \"kubernetes.io/projected/5008851d-8ee3-475b-854f-0622bf490d26-kube-api-access-8brcp\") pod \"whisker-5b75858f74-6sk86\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " pod="calico-system/whisker-5b75858f74-6sk86" Mar 14 00:10:08.559730 containerd[1479]: time="2026-03-14T00:10:08.559647668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-q78r4,Uid:01b2b6b9-d251-4025-a4b5-f21cd42a4542,Namespace:calico-system,Attempt:0,}" Mar 14 00:10:08.574079 containerd[1479]: time="2026-03-14T00:10:08.574018596Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-lqkzl,Uid:750010d3-d177-45c1-adc7-1d676ed1917e,Namespace:kube-system,Attempt:0,}" Mar 14 00:10:08.586595 containerd[1479]: time="2026-03-14T00:10:08.586064697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fff8f6d65-6s5j2,Uid:2e989e9d-2731-40ad-ae30-02ef1e6a05b7,Namespace:calico-system,Attempt:0,}" Mar 14 00:10:08.595055 containerd[1479]: time="2026-03-14T00:10:08.595019109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-kwfpb,Uid:b57fb4ff-b196-4410-bd4a-7db8a81fb987,Namespace:calico-system,Attempt:0,}" Mar 14 00:10:08.610429 containerd[1479]: time="2026-03-14T00:10:08.610317207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5kqn9,Uid:34f9a0e8-7ea2-403e-b90c-0c3c742508fa,Namespace:calico-system,Attempt:0,}" Mar 14 00:10:08.615966 containerd[1479]: time="2026-03-14T00:10:08.615908390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b75858f74-6sk86,Uid:5008851d-8ee3-475b-854f-0622bf490d26,Namespace:calico-system,Attempt:0,}" Mar 14 00:10:08.621555 containerd[1479]: time="2026-03-14T00:10:08.621293508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-m9tx7,Uid:67c2f56e-bd26-464f-9efc-470b8df89e0c,Namespace:kube-system,Attempt:0,}" Mar 14 00:10:08.766449 containerd[1479]: time="2026-03-14T00:10:08.766294009Z" level=error msg="Failed to destroy network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.766842 containerd[1479]: time="2026-03-14T00:10:08.766814290Z" level=error msg="encountered an error cleaning up failed sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.767081 containerd[1479]: time="2026-03-14T00:10:08.766979277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fff8f6d65-6s5j2,Uid:2e989e9d-2731-40ad-ae30-02ef1e6a05b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.767579 kubelet[2609]: E0314 00:10:08.767276 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.767579 kubelet[2609]: E0314 00:10:08.767344 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fff8f6d65-6s5j2" Mar 14 00:10:08.767579 kubelet[2609]: E0314 00:10:08.767386 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fff8f6d65-6s5j2" Mar 14 00:10:08.767716 kubelet[2609]: E0314 00:10:08.767439 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fff8f6d65-6s5j2_calico-system(2e989e9d-2731-40ad-ae30-02ef1e6a05b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fff8f6d65-6s5j2_calico-system(2e989e9d-2731-40ad-ae30-02ef1e6a05b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fff8f6d65-6s5j2" podUID="2e989e9d-2731-40ad-ae30-02ef1e6a05b7" Mar 14 00:10:08.774091 containerd[1479]: time="2026-03-14T00:10:08.773914400Z" level=error msg="Failed to destroy network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.774556 containerd[1479]: time="2026-03-14T00:10:08.774466959Z" level=error msg="encountered an error cleaning up failed sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.774660 containerd[1479]: time="2026-03-14T00:10:08.774638146Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-q78r4,Uid:01b2b6b9-d251-4025-a4b5-f21cd42a4542,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.775033 kubelet[2609]: E0314 00:10:08.774980 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.775115 kubelet[2609]: E0314 00:10:08.775053 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677d8bcf85-q78r4" Mar 14 00:10:08.775115 kubelet[2609]: E0314 00:10:08.775071 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677d8bcf85-q78r4" Mar 14 00:10:08.775416 kubelet[2609]: E0314 00:10:08.775332 2609 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677d8bcf85-q78r4_calico-system(01b2b6b9-d251-4025-a4b5-f21cd42a4542)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677d8bcf85-q78r4_calico-system(01b2b6b9-d251-4025-a4b5-f21cd42a4542)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677d8bcf85-q78r4" podUID="01b2b6b9-d251-4025-a4b5-f21cd42a4542" Mar 14 00:10:08.818974 containerd[1479]: time="2026-03-14T00:10:08.817712452Z" level=error msg="Failed to destroy network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.818974 containerd[1479]: time="2026-03-14T00:10:08.818791371Z" level=error msg="encountered an error cleaning up failed sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.818974 containerd[1479]: time="2026-03-14T00:10:08.818853367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lqkzl,Uid:750010d3-d177-45c1-adc7-1d676ed1917e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.819246 kubelet[2609]: E0314 00:10:08.819201 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.819304 kubelet[2609]: E0314 00:10:08.819258 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-lqkzl" Mar 14 00:10:08.819304 kubelet[2609]: E0314 00:10:08.819276 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-lqkzl" Mar 14 00:10:08.821258 kubelet[2609]: E0314 00:10:08.819709 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-lqkzl_kube-system(750010d3-d177-45c1-adc7-1d676ed1917e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-lqkzl_kube-system(750010d3-d177-45c1-adc7-1d676ed1917e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-lqkzl" podUID="750010d3-d177-45c1-adc7-1d676ed1917e" Mar 14 00:10:08.838885 containerd[1479]: time="2026-03-14T00:10:08.838832276Z" level=error msg="Failed to destroy network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.839026 containerd[1479]: time="2026-03-14T00:10:08.838988424Z" level=error msg="Failed to destroy network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.839563 containerd[1479]: time="2026-03-14T00:10:08.839291442Z" level=error msg="encountered an error cleaning up failed sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.839563 containerd[1479]: time="2026-03-14T00:10:08.839389954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-m9tx7,Uid:67c2f56e-bd26-464f-9efc-470b8df89e0c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.839711 kubelet[2609]: E0314 00:10:08.839622 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.839711 kubelet[2609]: E0314 00:10:08.839703 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-m9tx7" Mar 14 00:10:08.839975 kubelet[2609]: E0314 00:10:08.839722 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-m9tx7" Mar 14 00:10:08.839975 kubelet[2609]: E0314 00:10:08.839773 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-m9tx7_kube-system(67c2f56e-bd26-464f-9efc-470b8df89e0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-m9tx7_kube-system(67c2f56e-bd26-464f-9efc-470b8df89e0c)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-m9tx7" podUID="67c2f56e-bd26-464f-9efc-470b8df89e0c" Mar 14 00:10:08.841959 containerd[1479]: time="2026-03-14T00:10:08.841802334Z" level=error msg="encountered an error cleaning up failed sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.841959 containerd[1479]: time="2026-03-14T00:10:08.841875249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-kwfpb,Uid:b57fb4ff-b196-4410-bd4a-7db8a81fb987,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.843373 kubelet[2609]: E0314 00:10:08.843155 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.843373 kubelet[2609]: E0314 00:10:08.843244 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677d8bcf85-kwfpb" Mar 14 00:10:08.843373 kubelet[2609]: E0314 00:10:08.843266 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-677d8bcf85-kwfpb" Mar 14 00:10:08.845217 kubelet[2609]: E0314 00:10:08.843376 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-677d8bcf85-kwfpb_calico-system(b57fb4ff-b196-4410-bd4a-7db8a81fb987)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-677d8bcf85-kwfpb_calico-system(b57fb4ff-b196-4410-bd4a-7db8a81fb987)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677d8bcf85-kwfpb" podUID="b57fb4ff-b196-4410-bd4a-7db8a81fb987" Mar 14 00:10:08.855552 containerd[1479]: time="2026-03-14T00:10:08.855497112Z" level=error msg="Failed to destroy network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 14 00:10:08.856002 containerd[1479]: time="2026-03-14T00:10:08.855974037Z" level=error msg="encountered an error cleaning up failed sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.856133 containerd[1479]: time="2026-03-14T00:10:08.856111187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5kqn9,Uid:34f9a0e8-7ea2-403e-b90c-0c3c742508fa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.856502 kubelet[2609]: E0314 00:10:08.856466 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.856593 kubelet[2609]: E0314 00:10:08.856521 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5kqn9" Mar 14 00:10:08.857224 kubelet[2609]: E0314 
00:10:08.857082 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-5kqn9" Mar 14 00:10:08.857224 kubelet[2609]: E0314 00:10:08.857170 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-5kqn9_calico-system(34f9a0e8-7ea2-403e-b90c-0c3c742508fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-5kqn9_calico-system(34f9a0e8-7ea2-403e-b90c-0c3c742508fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-5kqn9" podUID="34f9a0e8-7ea2-403e-b90c-0c3c742508fa" Mar 14 00:10:08.864924 containerd[1479]: time="2026-03-14T00:10:08.864797658Z" level=error msg="Failed to destroy network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.865385 containerd[1479]: time="2026-03-14T00:10:08.865230146Z" level=error msg="encountered an error cleaning up failed sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.865385 containerd[1479]: time="2026-03-14T00:10:08.865281462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b75858f74-6sk86,Uid:5008851d-8ee3-475b-854f-0622bf490d26,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.865726 kubelet[2609]: E0314 00:10:08.865657 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:08.865726 kubelet[2609]: E0314 00:10:08.865714 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5b75858f74-6sk86" Mar 14 00:10:08.865893 kubelet[2609]: E0314 00:10:08.865734 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/whisker-5b75858f74-6sk86" Mar 14 00:10:08.865893 kubelet[2609]: E0314 00:10:08.865785 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5b75858f74-6sk86_calico-system(5008851d-8ee3-475b-854f-0622bf490d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5b75858f74-6sk86_calico-system(5008851d-8ee3-475b-854f-0622bf490d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b75858f74-6sk86" podUID="5008851d-8ee3-475b-854f-0622bf490d26" Mar 14 00:10:09.053806 systemd[1]: Created slice kubepods-besteffort-pod668745bf_4f34_407b_807f_7c7e77d971d9.slice - libcontainer container kubepods-besteffort-pod668745bf_4f34_407b_807f_7c7e77d971d9.slice. 
Mar 14 00:10:09.059091 containerd[1479]: time="2026-03-14T00:10:09.059054743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qb2mw,Uid:668745bf-4f34-407b-807f-7c7e77d971d9,Namespace:calico-system,Attempt:0,}" Mar 14 00:10:09.133300 containerd[1479]: time="2026-03-14T00:10:09.132640755Z" level=error msg="Failed to destroy network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.133890 containerd[1479]: time="2026-03-14T00:10:09.133697042Z" level=error msg="encountered an error cleaning up failed sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.134295 containerd[1479]: time="2026-03-14T00:10:09.133990502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qb2mw,Uid:668745bf-4f34-407b-807f-7c7e77d971d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.134771 kubelet[2609]: E0314 00:10:09.134725 2609 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.135093 kubelet[2609]: E0314 00:10:09.134800 2609 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qb2mw" Mar 14 00:10:09.135093 kubelet[2609]: E0314 00:10:09.134836 2609 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qb2mw" Mar 14 00:10:09.135270 kubelet[2609]: E0314 00:10:09.134921 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qb2mw_calico-system(668745bf-4f34-407b-807f-7c7e77d971d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qb2mw_calico-system(668745bf-4f34-407b-807f-7c7e77d971d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:10:09.225658 kubelet[2609]: I0314 00:10:09.225586 2609 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:09.228885 kubelet[2609]: I0314 00:10:09.227402 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:09.229005 containerd[1479]: time="2026-03-14T00:10:09.228937031Z" level=info msg="StopPodSandbox for \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\"" Mar 14 00:10:09.229158 containerd[1479]: time="2026-03-14T00:10:09.229132538Z" level=info msg="Ensure that sandbox d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f in task-service has been cleanup successfully" Mar 14 00:10:09.230060 containerd[1479]: time="2026-03-14T00:10:09.229498832Z" level=info msg="StopPodSandbox for \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\"" Mar 14 00:10:09.230060 containerd[1479]: time="2026-03-14T00:10:09.229644982Z" level=info msg="Ensure that sandbox 711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a in task-service has been cleanup successfully" Mar 14 00:10:09.233065 kubelet[2609]: I0314 00:10:09.233044 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:09.233872 containerd[1479]: time="2026-03-14T00:10:09.233786735Z" level=info msg="StopPodSandbox for \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\"" Mar 14 00:10:09.234273 containerd[1479]: time="2026-03-14T00:10:09.234251422Z" level=info msg="Ensure that sandbox dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938 in task-service has been cleanup successfully" Mar 14 00:10:09.235689 kubelet[2609]: I0314 00:10:09.235431 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:09.237224 containerd[1479]: 
time="2026-03-14T00:10:09.236769768Z" level=info msg="StopPodSandbox for \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\"" Mar 14 00:10:09.237224 containerd[1479]: time="2026-03-14T00:10:09.236922917Z" level=info msg="Ensure that sandbox 76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890 in task-service has been cleanup successfully" Mar 14 00:10:09.240386 kubelet[2609]: I0314 00:10:09.240358 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:09.242194 containerd[1479]: time="2026-03-14T00:10:09.241816337Z" level=info msg="StopPodSandbox for \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\"" Mar 14 00:10:09.243010 containerd[1479]: time="2026-03-14T00:10:09.242788310Z" level=info msg="Ensure that sandbox cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd in task-service has been cleanup successfully" Mar 14 00:10:09.248779 kubelet[2609]: I0314 00:10:09.248628 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:09.249920 containerd[1479]: time="2026-03-14T00:10:09.249764426Z" level=info msg="StopPodSandbox for \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\"" Mar 14 00:10:09.249997 containerd[1479]: time="2026-03-14T00:10:09.249931334Z" level=info msg="Ensure that sandbox 74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a in task-service has been cleanup successfully" Mar 14 00:10:09.279773 kubelet[2609]: I0314 00:10:09.279731 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:09.282880 containerd[1479]: time="2026-03-14T00:10:09.282788293Z" level=info msg="StopPodSandbox for 
\"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\"" Mar 14 00:10:09.289753 containerd[1479]: time="2026-03-14T00:10:09.289671496Z" level=info msg="Ensure that sandbox e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d in task-service has been cleanup successfully" Mar 14 00:10:09.314889 containerd[1479]: time="2026-03-14T00:10:09.314801231Z" level=info msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:10:09.315201 kubelet[2609]: I0314 00:10:09.315179 2609 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:09.318673 containerd[1479]: time="2026-03-14T00:10:09.318633245Z" level=info msg="StopPodSandbox for \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\"" Mar 14 00:10:09.319384 containerd[1479]: time="2026-03-14T00:10:09.318827112Z" level=info msg="Ensure that sandbox 137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e in task-service has been cleanup successfully" Mar 14 00:10:09.354967 containerd[1479]: time="2026-03-14T00:10:09.354915407Z" level=error msg="StopPodSandbox for \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\" failed" error="failed to destroy network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.356891 kubelet[2609]: E0314 00:10:09.356702 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:09.356891 kubelet[2609]: E0314 00:10:09.356766 2609 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938"} Mar 14 00:10:09.356891 kubelet[2609]: E0314 00:10:09.356818 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b57fb4ff-b196-4410-bd4a-7db8a81fb987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:10:09.356891 kubelet[2609]: E0314 00:10:09.356843 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b57fb4ff-b196-4410-bd4a-7db8a81fb987\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677d8bcf85-kwfpb" podUID="b57fb4ff-b196-4410-bd4a-7db8a81fb987" Mar 14 00:10:09.379075 containerd[1479]: time="2026-03-14T00:10:09.379016814Z" level=info msg="CreateContainer within sandbox \"696eab39f089823575166ecec3409cf6ee13138e9e181cdee7ae03f561a8d9a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f11acce38cc2f54d1013e68c7ccffdefdf0495ad90f5e9dd3c1c889254587be0\"" Mar 
14 00:10:09.380793 containerd[1479]: time="2026-03-14T00:10:09.380635982Z" level=info msg="StartContainer for \"f11acce38cc2f54d1013e68c7ccffdefdf0495ad90f5e9dd3c1c889254587be0\"" Mar 14 00:10:09.387588 containerd[1479]: time="2026-03-14T00:10:09.387431870Z" level=error msg="StopPodSandbox for \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\" failed" error="failed to destroy network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.388652 kubelet[2609]: E0314 00:10:09.387648 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:09.388652 kubelet[2609]: E0314 00:10:09.387692 2609 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a"} Mar 14 00:10:09.388652 kubelet[2609]: E0314 00:10:09.387724 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5008851d-8ee3-475b-854f-0622bf490d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 
00:10:09.388652 kubelet[2609]: E0314 00:10:09.387749 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5008851d-8ee3-475b-854f-0622bf490d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5b75858f74-6sk86" podUID="5008851d-8ee3-475b-854f-0622bf490d26" Mar 14 00:10:09.394094 containerd[1479]: time="2026-03-14T00:10:09.393747752Z" level=error msg="StopPodSandbox for \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\" failed" error="failed to destroy network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.394202 kubelet[2609]: E0314 00:10:09.393960 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:09.394202 kubelet[2609]: E0314 00:10:09.394000 2609 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f"} Mar 14 00:10:09.394202 kubelet[2609]: E0314 00:10:09.394029 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"668745bf-4f34-407b-807f-7c7e77d971d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:10:09.394202 kubelet[2609]: E0314 00:10:09.394054 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"668745bf-4f34-407b-807f-7c7e77d971d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qb2mw" podUID="668745bf-4f34-407b-807f-7c7e77d971d9" Mar 14 00:10:09.395531 containerd[1479]: time="2026-03-14T00:10:09.394591973Z" level=error msg="StopPodSandbox for \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\" failed" error="failed to destroy network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.395743 kubelet[2609]: E0314 00:10:09.395702 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:09.396223 kubelet[2609]: E0314 00:10:09.395821 2609 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890"} Mar 14 00:10:09.396405 kubelet[2609]: E0314 00:10:09.396363 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"750010d3-d177-45c1-adc7-1d676ed1917e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:10:09.396714 kubelet[2609]: E0314 00:10:09.396686 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"750010d3-d177-45c1-adc7-1d676ed1917e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-lqkzl" podUID="750010d3-d177-45c1-adc7-1d676ed1917e" Mar 14 00:10:09.397686 containerd[1479]: time="2026-03-14T00:10:09.397633562Z" level=error msg="StopPodSandbox for \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\" failed" error="failed to destroy network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 14 00:10:09.397940 kubelet[2609]: E0314 00:10:09.397913 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:09.398080 kubelet[2609]: E0314 00:10:09.398064 2609 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a"} Mar 14 00:10:09.398185 kubelet[2609]: E0314 00:10:09.398166 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"34f9a0e8-7ea2-403e-b90c-0c3c742508fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:10:09.398830 kubelet[2609]: E0314 00:10:09.398765 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34f9a0e8-7ea2-403e-b90c-0c3c742508fa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-5kqn9" podUID="34f9a0e8-7ea2-403e-b90c-0c3c742508fa" Mar 14 
00:10:09.406700 containerd[1479]: time="2026-03-14T00:10:09.406653856Z" level=error msg="StopPodSandbox for \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\" failed" error="failed to destroy network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.406910 kubelet[2609]: E0314 00:10:09.406866 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:09.406975 kubelet[2609]: E0314 00:10:09.406908 2609 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd"} Mar 14 00:10:09.406975 kubelet[2609]: E0314 00:10:09.406948 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01b2b6b9-d251-4025-a4b5-f21cd42a4542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:10:09.406975 kubelet[2609]: E0314 00:10:09.406971 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01b2b6b9-d251-4025-a4b5-f21cd42a4542\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-677d8bcf85-q78r4" podUID="01b2b6b9-d251-4025-a4b5-f21cd42a4542" Mar 14 00:10:09.415099 containerd[1479]: time="2026-03-14T00:10:09.415052073Z" level=error msg="StopPodSandbox for \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\" failed" error="failed to destroy network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.415557 kubelet[2609]: E0314 00:10:09.415265 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:09.415557 kubelet[2609]: E0314 00:10:09.415318 2609 kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e"} Mar 14 00:10:09.415557 kubelet[2609]: E0314 00:10:09.415346 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e989e9d-2731-40ad-ae30-02ef1e6a05b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:10:09.415557 kubelet[2609]: E0314 00:10:09.415373 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e989e9d-2731-40ad-ae30-02ef1e6a05b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fff8f6d65-6s5j2" podUID="2e989e9d-2731-40ad-ae30-02ef1e6a05b7" Mar 14 00:10:09.423746 containerd[1479]: time="2026-03-14T00:10:09.423703512Z" level=error msg="StopPodSandbox for \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\" failed" error="failed to destroy network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:10:09.424155 kubelet[2609]: E0314 00:10:09.423912 2609 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:09.424155 kubelet[2609]: E0314 00:10:09.423963 2609 
kuberuntime_manager.go:1881] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d"} Mar 14 00:10:09.424155 kubelet[2609]: E0314 00:10:09.423994 2609 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67c2f56e-bd26-464f-9efc-470b8df89e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 14 00:10:09.424155 kubelet[2609]: E0314 00:10:09.424019 2609 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67c2f56e-bd26-464f-9efc-470b8df89e0c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-m9tx7" podUID="67c2f56e-bd26-464f-9efc-470b8df89e0c" Mar 14 00:10:09.435340 systemd[1]: Started cri-containerd-f11acce38cc2f54d1013e68c7ccffdefdf0495ad90f5e9dd3c1c889254587be0.scope - libcontainer container f11acce38cc2f54d1013e68c7ccffdefdf0495ad90f5e9dd3c1c889254587be0. Mar 14 00:10:09.468529 containerd[1479]: time="2026-03-14T00:10:09.468481484Z" level=info msg="StartContainer for \"f11acce38cc2f54d1013e68c7ccffdefdf0495ad90f5e9dd3c1c889254587be0\" returns successfully" Mar 14 00:10:09.522985 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890-shm.mount: Deactivated successfully. 
Mar 14 00:10:09.523093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd-shm.mount: Deactivated successfully. Mar 14 00:10:10.324679 containerd[1479]: time="2026-03-14T00:10:10.324243876Z" level=info msg="StopPodSandbox for \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\"" Mar 14 00:10:10.379574 kubelet[2609]: I0314 00:10:10.378405 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-62zmq" podStartSLOduration=2.025131645 podStartE2EDuration="17.378391311s" podCreationTimestamp="2026-03-14 00:09:53 +0000 UTC" firstStartedPulling="2026-03-14 00:09:53.934764624 +0000 UTC m=+24.004634435" lastFinishedPulling="2026-03-14 00:10:09.28802425 +0000 UTC m=+39.357894101" observedRunningTime="2026-03-14 00:10:10.35851855 +0000 UTC m=+40.428388481" watchObservedRunningTime="2026-03-14 00:10:10.378391311 +0000 UTC m=+40.448261122" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.419 [INFO][3832] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.420 [INFO][3832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" iface="eth0" netns="/var/run/netns/cni-58080c67-ea34-9fda-4798-0a4760254211" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.420 [INFO][3832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" iface="eth0" netns="/var/run/netns/cni-58080c67-ea34-9fda-4798-0a4760254211" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.421 [INFO][3832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" iface="eth0" netns="/var/run/netns/cni-58080c67-ea34-9fda-4798-0a4760254211" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.421 [INFO][3832] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.421 [INFO][3832] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.467 [INFO][3852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.468 [INFO][3852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.468 [INFO][3852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.483 [WARNING][3852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.483 [INFO][3852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.486 [INFO][3852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:10.492213 containerd[1479]: 2026-03-14 00:10:10.489 [INFO][3832] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:10.494424 containerd[1479]: time="2026-03-14T00:10:10.494213416Z" level=info msg="TearDown network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\" successfully" Mar 14 00:10:10.494424 containerd[1479]: time="2026-03-14T00:10:10.494273212Z" level=info msg="StopPodSandbox for \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\" returns successfully" Mar 14 00:10:10.497548 systemd[1]: run-netns-cni\x2d58080c67\x2dea34\x2d9fda\x2d4798\x2d0a4760254211.mount: Deactivated successfully. 
Mar 14 00:10:10.545687 kubelet[2609]: I0314 00:10:10.544687 2609 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-whisker-ca-bundle\") pod \"5008851d-8ee3-475b-854f-0622bf490d26\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " Mar 14 00:10:10.545687 kubelet[2609]: I0314 00:10:10.544747 2609 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/5008851d-8ee3-475b-854f-0622bf490d26-kube-api-access-8brcp\" (UniqueName: \"kubernetes.io/projected/5008851d-8ee3-475b-854f-0622bf490d26-kube-api-access-8brcp\") pod \"5008851d-8ee3-475b-854f-0622bf490d26\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " Mar 14 00:10:10.545687 kubelet[2609]: I0314 00:10:10.544779 2609 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-nginx-config\" (UniqueName: \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-nginx-config\") pod \"5008851d-8ee3-475b-854f-0622bf490d26\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " Mar 14 00:10:10.545687 kubelet[2609]: I0314 00:10:10.544812 2609 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/5008851d-8ee3-475b-854f-0622bf490d26-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5008851d-8ee3-475b-854f-0622bf490d26-whisker-backend-key-pair\") pod \"5008851d-8ee3-475b-854f-0622bf490d26\" (UID: \"5008851d-8ee3-475b-854f-0622bf490d26\") " Mar 14 00:10:10.545687 kubelet[2609]: I0314 00:10:10.545368 2609 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-whisker-ca-bundle" pod "5008851d-8ee3-475b-854f-0622bf490d26" (UID: "5008851d-8ee3-475b-854f-0622bf490d26"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:10:10.546790 kubelet[2609]: I0314 00:10:10.546741 2609 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-nginx-config" pod "5008851d-8ee3-475b-854f-0622bf490d26" (UID: "5008851d-8ee3-475b-854f-0622bf490d26"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:10:10.550996 kubelet[2609]: I0314 00:10:10.550790 2609 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5008851d-8ee3-475b-854f-0622bf490d26-kube-api-access-8brcp" pod "5008851d-8ee3-475b-854f-0622bf490d26" (UID: "5008851d-8ee3-475b-854f-0622bf490d26"). InnerVolumeSpecName "kube-api-access-8brcp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:10:10.550996 kubelet[2609]: I0314 00:10:10.550932 2609 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5008851d-8ee3-475b-854f-0622bf490d26-whisker-backend-key-pair" pod "5008851d-8ee3-475b-854f-0622bf490d26" (UID: "5008851d-8ee3-475b-854f-0622bf490d26"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:10:10.552500 systemd[1]: var-lib-kubelet-pods-5008851d\x2d8ee3\x2d475b\x2d854f\x2d0622bf490d26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8brcp.mount: Deactivated successfully. Mar 14 00:10:10.552617 systemd[1]: var-lib-kubelet-pods-5008851d\x2d8ee3\x2d475b\x2d854f\x2d0622bf490d26-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 14 00:10:10.645889 kubelet[2609]: I0314 00:10:10.645719 2609 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-whisker-ca-bundle\") on node \"ci-4081-3-6-n-0ed13f424d\" DevicePath \"\"" Mar 14 00:10:10.645889 kubelet[2609]: I0314 00:10:10.645771 2609 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8brcp\" (UniqueName: \"kubernetes.io/projected/5008851d-8ee3-475b-854f-0622bf490d26-kube-api-access-8brcp\") on node \"ci-4081-3-6-n-0ed13f424d\" DevicePath \"\"" Mar 14 00:10:10.645889 kubelet[2609]: I0314 00:10:10.645787 2609 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5008851d-8ee3-475b-854f-0622bf490d26-nginx-config\") on node \"ci-4081-3-6-n-0ed13f424d\" DevicePath \"\"" Mar 14 00:10:10.645889 kubelet[2609]: I0314 00:10:10.645800 2609 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5008851d-8ee3-475b-854f-0622bf490d26-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-0ed13f424d\" DevicePath \"\"" Mar 14 00:10:11.216572 kernel: calico-node[3877]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:10:11.336484 systemd[1]: Removed slice kubepods-besteffort-pod5008851d_8ee3_475b_854f_0622bf490d26.slice - libcontainer container kubepods-besteffort-pod5008851d_8ee3_475b_854f_0622bf490d26.slice. Mar 14 00:10:11.461427 systemd[1]: Created slice kubepods-besteffort-pod4a878f82_9410_4547_ba1e_f192e868d836.slice - libcontainer container kubepods-besteffort-pod4a878f82_9410_4547_ba1e_f192e868d836.slice. 
Mar 14 00:10:11.553839 kubelet[2609]: I0314 00:10:11.552704 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4a878f82-9410-4547-ba1e-f192e868d836-nginx-config\") pod \"whisker-7c59b89955-dhdxx\" (UID: \"4a878f82-9410-4547-ba1e-f192e868d836\") " pod="calico-system/whisker-7c59b89955-dhdxx" Mar 14 00:10:11.554838 kubelet[2609]: I0314 00:10:11.554496 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4a878f82-9410-4547-ba1e-f192e868d836-whisker-backend-key-pair\") pod \"whisker-7c59b89955-dhdxx\" (UID: \"4a878f82-9410-4547-ba1e-f192e868d836\") " pod="calico-system/whisker-7c59b89955-dhdxx" Mar 14 00:10:11.554838 kubelet[2609]: I0314 00:10:11.554691 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgkj4\" (UniqueName: \"kubernetes.io/projected/4a878f82-9410-4547-ba1e-f192e868d836-kube-api-access-wgkj4\") pod \"whisker-7c59b89955-dhdxx\" (UID: \"4a878f82-9410-4547-ba1e-f192e868d836\") " pod="calico-system/whisker-7c59b89955-dhdxx" Mar 14 00:10:11.554838 kubelet[2609]: I0314 00:10:11.554764 2609 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a878f82-9410-4547-ba1e-f192e868d836-whisker-ca-bundle\") pod \"whisker-7c59b89955-dhdxx\" (UID: \"4a878f82-9410-4547-ba1e-f192e868d836\") " pod="calico-system/whisker-7c59b89955-dhdxx" Mar 14 00:10:11.762196 systemd-networkd[1376]: vxlan.calico: Link UP Mar 14 00:10:11.762208 systemd-networkd[1376]: vxlan.calico: Gained carrier Mar 14 00:10:11.768333 containerd[1479]: time="2026-03-14T00:10:11.767784223Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7c59b89955-dhdxx,Uid:4a878f82-9410-4547-ba1e-f192e868d836,Namespace:calico-system,Attempt:0,}" Mar 14 00:10:11.962184 systemd-networkd[1376]: calie2007351ea9: Link UP Mar 14 00:10:11.964696 systemd-networkd[1376]: calie2007351ea9: Gained carrier Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.858 [INFO][4049] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0 whisker-7c59b89955- calico-system 4a878f82-9410-4547-ba1e-f192e868d836 913 0 2026-03-14 00:10:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7c59b89955 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d whisker-7c59b89955-dhdxx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2007351ea9 [] [] }} ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.859 [INFO][4049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.889 [INFO][4066] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" HandleID="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" Mar 14 00:10:11.986742 
containerd[1479]: 2026-03-14 00:10:11.900 [INFO][4066] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" HandleID="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"whisker-7c59b89955-dhdxx", "timestamp":"2026-03-14 00:10:11.889175883 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000210dc0)} Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.900 [INFO][4066] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.900 [INFO][4066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.900 [INFO][4066] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.903 [INFO][4066] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.909 [INFO][4066] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.914 [INFO][4066] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.917 [INFO][4066] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.919 [INFO][4066] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.919 [INFO][4066] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.921 [INFO][4066] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.926 [INFO][4066] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.934 [INFO][4066] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.1/26] block=192.168.57.0/26 handle="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.935 [INFO][4066] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.1/26] handle="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.935 [INFO][4066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:11.986742 containerd[1479]: 2026-03-14 00:10:11.935 [INFO][4066] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.1/26] IPv6=[] ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" HandleID="k8s-pod-network.3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" Mar 14 00:10:11.988660 containerd[1479]: 2026-03-14 00:10:11.937 [INFO][4049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0", GenerateName:"whisker-7c59b89955-", Namespace:"calico-system", SelfLink:"", UID:"4a878f82-9410-4547-ba1e-f192e868d836", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 10, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c59b89955", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"whisker-7c59b89955-dhdxx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2007351ea9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:11.988660 containerd[1479]: 2026-03-14 00:10:11.937 [INFO][4049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.1/32] ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" Mar 14 00:10:11.988660 containerd[1479]: 2026-03-14 00:10:11.937 [INFO][4049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2007351ea9 ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" Mar 14 00:10:11.988660 containerd[1479]: 2026-03-14 00:10:11.961 [INFO][4049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" Mar 14 00:10:11.988660 containerd[1479]: 2026-03-14 00:10:11.966 [INFO][4049] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0", GenerateName:"whisker-7c59b89955-", Namespace:"calico-system", SelfLink:"", UID:"4a878f82-9410-4547-ba1e-f192e868d836", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 10, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7c59b89955", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b", Pod:"whisker-7c59b89955-dhdxx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.57.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2007351ea9", MAC:"f2:c2:c8:65:1d:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:11.988660 containerd[1479]: 2026-03-14 00:10:11.983 [INFO][4049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b" Namespace="calico-system" Pod="whisker-7c59b89955-dhdxx" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--7c59b89955--dhdxx-eth0" Mar 14 00:10:12.019910 containerd[1479]: time="2026-03-14T00:10:12.018218126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:12.019910 containerd[1479]: time="2026-03-14T00:10:12.018287482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:12.019910 containerd[1479]: time="2026-03-14T00:10:12.018305521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:12.019910 containerd[1479]: time="2026-03-14T00:10:12.018397236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:12.060629 kubelet[2609]: I0314 00:10:12.060323 2609 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5008851d-8ee3-475b-854f-0622bf490d26" path="/var/lib/kubelet/pods/5008851d-8ee3-475b-854f-0622bf490d26/volumes" Mar 14 00:10:12.061744 systemd[1]: Started cri-containerd-3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b.scope - libcontainer container 3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b. 
Mar 14 00:10:12.123052 containerd[1479]: time="2026-03-14T00:10:12.122941353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c59b89955-dhdxx,Uid:4a878f82-9410-4547-ba1e-f192e868d836,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b\"" Mar 14 00:10:12.130395 containerd[1479]: time="2026-03-14T00:10:12.130355787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:10:12.968787 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Mar 14 00:10:13.097788 systemd-networkd[1376]: calie2007351ea9: Gained IPv6LL Mar 14 00:10:13.870625 containerd[1479]: time="2026-03-14T00:10:13.870573337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:13.871556 containerd[1479]: time="2026-03-14T00:10:13.871472172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 14 00:10:13.872565 containerd[1479]: time="2026-03-14T00:10:13.872478882Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:13.876148 containerd[1479]: time="2026-03-14T00:10:13.876114659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:13.877079 containerd[1479]: time="2026-03-14T00:10:13.876831383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", 
size \"7280321\" in 1.746429919s" Mar 14 00:10:13.877079 containerd[1479]: time="2026-03-14T00:10:13.876870621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 14 00:10:13.882235 containerd[1479]: time="2026-03-14T00:10:13.882160676Z" level=info msg="CreateContainer within sandbox \"3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:10:13.897673 containerd[1479]: time="2026-03-14T00:10:13.897619660Z" level=info msg="CreateContainer within sandbox \"3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5cb2214da7a515d73ca055aac2bf6b43bcd03307c267621fb03c910426732d55\"" Mar 14 00:10:13.898379 containerd[1479]: time="2026-03-14T00:10:13.898341624Z" level=info msg="StartContainer for \"5cb2214da7a515d73ca055aac2bf6b43bcd03307c267621fb03c910426732d55\"" Mar 14 00:10:13.932916 systemd[1]: Started cri-containerd-5cb2214da7a515d73ca055aac2bf6b43bcd03307c267621fb03c910426732d55.scope - libcontainer container 5cb2214da7a515d73ca055aac2bf6b43bcd03307c267621fb03c910426732d55. Mar 14 00:10:13.975917 containerd[1479]: time="2026-03-14T00:10:13.973513453Z" level=info msg="StartContainer for \"5cb2214da7a515d73ca055aac2bf6b43bcd03307c267621fb03c910426732d55\" returns successfully" Mar 14 00:10:13.975917 containerd[1479]: time="2026-03-14T00:10:13.975258686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:10:16.055771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923535391.mount: Deactivated successfully. 
Mar 14 00:10:16.072324 containerd[1479]: time="2026-03-14T00:10:16.072225123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:16.073327 containerd[1479]: time="2026-03-14T00:10:16.073284523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 14 00:10:16.074215 containerd[1479]: time="2026-03-14T00:10:16.074124532Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:16.078158 containerd[1479]: time="2026-03-14T00:10:16.077139500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:16.078158 containerd[1479]: time="2026-03-14T00:10:16.078029987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.102737742s" Mar 14 00:10:16.078158 containerd[1479]: time="2026-03-14T00:10:16.078063185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 14 00:10:16.083373 containerd[1479]: time="2026-03-14T00:10:16.083327029Z" level=info msg="CreateContainer within sandbox \"3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:10:16.102849 
containerd[1479]: time="2026-03-14T00:10:16.102787505Z" level=info msg="CreateContainer within sandbox \"3e0adb17612df211c53f184d52483805d33c3ade7d6d4f92ffe35de1731e879b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a23ca3b8bd6cc06ee804045bca20ca01583ea423832fe37306da6c5802cccb26\"" Mar 14 00:10:16.103437 containerd[1479]: time="2026-03-14T00:10:16.103377923Z" level=info msg="StartContainer for \"a23ca3b8bd6cc06ee804045bca20ca01583ea423832fe37306da6c5802cccb26\"" Mar 14 00:10:16.142825 systemd[1]: Started cri-containerd-a23ca3b8bd6cc06ee804045bca20ca01583ea423832fe37306da6c5802cccb26.scope - libcontainer container a23ca3b8bd6cc06ee804045bca20ca01583ea423832fe37306da6c5802cccb26. Mar 14 00:10:16.183477 containerd[1479]: time="2026-03-14T00:10:16.183420381Z" level=info msg="StartContainer for \"a23ca3b8bd6cc06ee804045bca20ca01583ea423832fe37306da6c5802cccb26\" returns successfully" Mar 14 00:10:16.891665 systemd[1]: run-containerd-runc-k8s.io-a23ca3b8bd6cc06ee804045bca20ca01583ea423832fe37306da6c5802cccb26-runc.ZmwpGK.mount: Deactivated successfully. 
Mar 14 00:10:21.047086 containerd[1479]: time="2026-03-14T00:10:21.047021593Z" level=info msg="StopPodSandbox for \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\"" Mar 14 00:10:21.049643 containerd[1479]: time="2026-03-14T00:10:21.047859298Z" level=info msg="StopPodSandbox for \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\"" Mar 14 00:10:21.049643 containerd[1479]: time="2026-03-14T00:10:21.047036313Z" level=info msg="StopPodSandbox for \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\"" Mar 14 00:10:21.128744 kubelet[2609]: I0314 00:10:21.128297 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7c59b89955-dhdxx" podStartSLOduration=6.178752465 podStartE2EDuration="10.128279188s" podCreationTimestamp="2026-03-14 00:10:11 +0000 UTC" firstStartedPulling="2026-03-14 00:10:12.129892172 +0000 UTC m=+42.199762023" lastFinishedPulling="2026-03-14 00:10:16.079418935 +0000 UTC m=+46.149288746" observedRunningTime="2026-03-14 00:10:16.364133969 +0000 UTC m=+46.434003820" watchObservedRunningTime="2026-03-14 00:10:21.128279188 +0000 UTC m=+51.198149039" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.127 [INFO][4337] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.128 [INFO][4337] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" iface="eth0" netns="/var/run/netns/cni-6e1d930f-f5b3-2dd9-dc0a-23e31b4d1d48" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.129 [INFO][4337] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" iface="eth0" netns="/var/run/netns/cni-6e1d930f-f5b3-2dd9-dc0a-23e31b4d1d48" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.129 [INFO][4337] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" iface="eth0" netns="/var/run/netns/cni-6e1d930f-f5b3-2dd9-dc0a-23e31b4d1d48" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.130 [INFO][4337] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.130 [INFO][4337] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.167 [INFO][4352] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.170 [INFO][4352] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.170 [INFO][4352] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.186 [WARNING][4352] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.186 [INFO][4352] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.189 [INFO][4352] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:21.194793 containerd[1479]: 2026-03-14 00:10:21.192 [INFO][4337] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:21.195220 containerd[1479]: time="2026-03-14T00:10:21.194820091Z" level=info msg="TearDown network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\" successfully" Mar 14 00:10:21.195220 containerd[1479]: time="2026-03-14T00:10:21.194846371Z" level=info msg="StopPodSandbox for \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\" returns successfully" Mar 14 00:10:21.201692 systemd[1]: run-netns-cni\x2d6e1d930f\x2df5b3\x2d2dd9\x2ddc0a\x2d23e31b4d1d48.mount: Deactivated successfully. 
Mar 14 00:10:21.204469 containerd[1479]: time="2026-03-14T00:10:21.204428635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5kqn9,Uid:34f9a0e8-7ea2-403e-b90c-0c3c742508fa,Namespace:calico-system,Attempt:1,}" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.155 [INFO][4326] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.155 [INFO][4326] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" iface="eth0" netns="/var/run/netns/cni-2d36d029-6fde-08e0-93ed-1e1ba82fcaab" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.155 [INFO][4326] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" iface="eth0" netns="/var/run/netns/cni-2d36d029-6fde-08e0-93ed-1e1ba82fcaab" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.156 [INFO][4326] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" iface="eth0" netns="/var/run/netns/cni-2d36d029-6fde-08e0-93ed-1e1ba82fcaab" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.156 [INFO][4326] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.156 [INFO][4326] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.190 [INFO][4358] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.191 [INFO][4358] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.191 [INFO][4358] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.210 [WARNING][4358] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.210 [INFO][4358] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.212 [INFO][4358] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:21.217090 containerd[1479]: 2026-03-14 00:10:21.215 [INFO][4326] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:21.219795 containerd[1479]: time="2026-03-14T00:10:21.219639037Z" level=info msg="TearDown network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\" successfully" Mar 14 00:10:21.219795 containerd[1479]: time="2026-03-14T00:10:21.219672517Z" level=info msg="StopPodSandbox for \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\" returns successfully" Mar 14 00:10:21.222220 systemd[1]: run-netns-cni\x2d2d36d029\x2d6fde\x2d08e0\x2d93ed\x2d1e1ba82fcaab.mount: Deactivated successfully. 
Mar 14 00:10:21.228264 containerd[1479]: time="2026-03-14T00:10:21.227759649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qb2mw,Uid:668745bf-4f34-407b-807f-7c7e77d971d9,Namespace:calico-system,Attempt:1,}" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.149 [INFO][4336] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.150 [INFO][4336] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" iface="eth0" netns="/var/run/netns/cni-68233a22-1214-a37c-0147-6035daebec68" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.151 [INFO][4336] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" iface="eth0" netns="/var/run/netns/cni-68233a22-1214-a37c-0147-6035daebec68" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.155 [INFO][4336] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" iface="eth0" netns="/var/run/netns/cni-68233a22-1214-a37c-0147-6035daebec68" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.155 [INFO][4336] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.155 [INFO][4336] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.207 [INFO][4359] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.207 [INFO][4359] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.212 [INFO][4359] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.234 [WARNING][4359] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.234 [INFO][4359] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.238 [INFO][4359] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:21.243496 containerd[1479]: 2026-03-14 00:10:21.240 [INFO][4336] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:21.246259 systemd[1]: run-netns-cni\x2d68233a22\x2d1214\x2da37c\x2d0147\x2d6035daebec68.mount: Deactivated successfully. 
Mar 14 00:10:21.247392 containerd[1479]: time="2026-03-14T00:10:21.247164054Z" level=info msg="TearDown network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\" successfully" Mar 14 00:10:21.249996 containerd[1479]: time="2026-03-14T00:10:21.249963483Z" level=info msg="StopPodSandbox for \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\" returns successfully" Mar 14 00:10:21.254999 containerd[1479]: time="2026-03-14T00:10:21.254846394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lqkzl,Uid:750010d3-d177-45c1-adc7-1d676ed1917e,Namespace:kube-system,Attempt:1,}" Mar 14 00:10:21.421252 systemd-networkd[1376]: calif72c2efc10e: Link UP Mar 14 00:10:21.426950 systemd-networkd[1376]: calif72c2efc10e: Gained carrier Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.298 [INFO][4372] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0 goldmane-9f7667bb8- calico-system 34f9a0e8-7ea2-403e-b90c-0c3c742508fa 957 0 2026-03-14 00:09:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d goldmane-9f7667bb8-5kqn9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif72c2efc10e [] [] }} ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.298 [INFO][4372] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.343 [INFO][4404] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" HandleID="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.361 [INFO][4404] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" HandleID="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002734a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"goldmane-9f7667bb8-5kqn9", "timestamp":"2026-03-14 00:10:21.343694009 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001eaf20)} Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.361 [INFO][4404] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.361 [INFO][4404] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.361 [INFO][4404] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.365 [INFO][4404] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.373 [INFO][4404] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.382 [INFO][4404] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.388 [INFO][4404] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.391 [INFO][4404] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.391 [INFO][4404] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.395 [INFO][4404] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9 Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.403 [INFO][4404] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.414 [INFO][4404] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.2/26] block=192.168.57.0/26 handle="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.414 [INFO][4404] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.2/26] handle="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.414 [INFO][4404] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:21.451371 containerd[1479]: 2026-03-14 00:10:21.414 [INFO][4404] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.2/26] IPv6=[] ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" HandleID="k8s-pod-network.61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.453305 containerd[1479]: 2026-03-14 00:10:21.417 [INFO][4372] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"34f9a0e8-7ea2-403e-b90c-0c3c742508fa", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"goldmane-9f7667bb8-5kqn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif72c2efc10e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:21.453305 containerd[1479]: 2026-03-14 00:10:21.418 [INFO][4372] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.2/32] ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.453305 containerd[1479]: 2026-03-14 00:10:21.418 [INFO][4372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif72c2efc10e ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.453305 containerd[1479]: 2026-03-14 00:10:21.421 [INFO][4372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.453305 containerd[1479]: 2026-03-14 00:10:21.422 [INFO][4372] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"34f9a0e8-7ea2-403e-b90c-0c3c742508fa", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9", Pod:"goldmane-9f7667bb8-5kqn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif72c2efc10e", MAC:"ea:4c:0a:bf:88:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:21.453305 containerd[1479]: 2026-03-14 00:10:21.441 [INFO][4372] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9" Namespace="calico-system" Pod="goldmane-9f7667bb8-5kqn9" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:21.474749 containerd[1479]: time="2026-03-14T00:10:21.474501498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:21.474749 containerd[1479]: time="2026-03-14T00:10:21.474602936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:21.474749 containerd[1479]: time="2026-03-14T00:10:21.474618816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:21.475194 containerd[1479]: time="2026-03-14T00:10:21.474778733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:21.498055 systemd[1]: Started cri-containerd-61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9.scope - libcontainer container 61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9. 
Mar 14 00:10:21.525061 systemd-networkd[1376]: cali8354e5cc86b: Link UP Mar 14 00:10:21.525253 systemd-networkd[1376]: cali8354e5cc86b: Gained carrier Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.337 [INFO][4381] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0 csi-node-driver- calico-system 668745bf-4f34-407b-807f-7c7e77d971d9 959 0 2026-03-14 00:09:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d csi-node-driver-qb2mw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8354e5cc86b [] [] }} ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.338 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.377 [INFO][4413] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" HandleID="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.392 [INFO][4413] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" HandleID="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb330), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"csi-node-driver-qb2mw", "timestamp":"2026-03-14 00:10:21.377489032 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000254dc0)} Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.392 [INFO][4413] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.414 [INFO][4413] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.414 [INFO][4413] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.468 [INFO][4413] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.478 [INFO][4413] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.487 [INFO][4413] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.493 [INFO][4413] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.498 [INFO][4413] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.498 [INFO][4413] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.501 [INFO][4413] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56 Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.506 [INFO][4413] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.514 [INFO][4413] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.3/26] block=192.168.57.0/26 handle="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.514 [INFO][4413] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.3/26] handle="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.515 [INFO][4413] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:21.537302 containerd[1479]: 2026-03-14 00:10:21.515 [INFO][4413] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.3/26] IPv6=[] ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" HandleID="k8s-pod-network.d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.538117 containerd[1479]: 2026-03-14 00:10:21.517 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"668745bf-4f34-407b-807f-7c7e77d971d9", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"csi-node-driver-qb2mw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8354e5cc86b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:21.538117 containerd[1479]: 2026-03-14 00:10:21.518 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.3/32] ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.538117 containerd[1479]: 2026-03-14 00:10:21.518 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8354e5cc86b ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.538117 containerd[1479]: 2026-03-14 00:10:21.519 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.538117 containerd[1479]: 2026-03-14 
00:10:21.520 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"668745bf-4f34-407b-807f-7c7e77d971d9", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56", Pod:"csi-node-driver-qb2mw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8354e5cc86b", MAC:"c6:c0:c6:e9:1f:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:21.538117 containerd[1479]: 2026-03-14 00:10:21.535 
[INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56" Namespace="calico-system" Pod="csi-node-driver-qb2mw" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:21.569731 containerd[1479]: time="2026-03-14T00:10:21.569624559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-5kqn9,Uid:34f9a0e8-7ea2-403e-b90c-0c3c742508fa,Namespace:calico-system,Attempt:1,} returns sandbox id \"61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9\"" Mar 14 00:10:21.575946 containerd[1479]: time="2026-03-14T00:10:21.575796966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:10:21.582783 containerd[1479]: time="2026-03-14T00:10:21.582229328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:21.583290 containerd[1479]: time="2026-03-14T00:10:21.583057673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:21.583290 containerd[1479]: time="2026-03-14T00:10:21.583099832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:21.584648 containerd[1479]: time="2026-03-14T00:10:21.584257731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:21.618734 systemd[1]: Started cri-containerd-d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56.scope - libcontainer container d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56. 
Mar 14 00:10:21.638163 systemd-networkd[1376]: cali1fa8d254c37: Link UP Mar 14 00:10:21.638344 systemd-networkd[1376]: cali1fa8d254c37: Gained carrier Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.339 [INFO][4392] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0 coredns-7d764666f9- kube-system 750010d3-d177-45c1-adc7-1d676ed1917e 958 0 2026-03-14 00:09:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d coredns-7d764666f9-lqkzl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1fa8d254c37 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.340 [INFO][4392] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.401 [INFO][4418] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" HandleID="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.415 [INFO][4418] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" HandleID="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005d6010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"coredns-7d764666f9-lqkzl", "timestamp":"2026-03-14 00:10:21.401226598 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003c7340)} Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.416 [INFO][4418] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.515 [INFO][4418] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.515 [INFO][4418] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.567 [INFO][4418] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.582 [INFO][4418] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.591 [INFO][4418] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.595 [INFO][4418] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.600 [INFO][4418] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.600 [INFO][4418] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.605 [INFO][4418] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.616 [INFO][4418] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.626 [INFO][4418] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.4/26] block=192.168.57.0/26 handle="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.627 [INFO][4418] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.4/26] handle="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.627 [INFO][4418] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:21.668520 containerd[1479]: 2026-03-14 00:10:21.627 [INFO][4418] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.4/26] IPv6=[] ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" HandleID="k8s-pod-network.10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.669458 containerd[1479]: 2026-03-14 00:10:21.634 [INFO][4392] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"750010d3-d177-45c1-adc7-1d676ed1917e", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"coredns-7d764666f9-lqkzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fa8d254c37", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:21.669458 containerd[1479]: 2026-03-14 00:10:21.634 [INFO][4392] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.4/32] ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.669458 containerd[1479]: 2026-03-14 00:10:21.634 [INFO][4392] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fa8d254c37 
ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.669458 containerd[1479]: 2026-03-14 00:10:21.637 [INFO][4392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.669458 containerd[1479]: 2026-03-14 00:10:21.641 [INFO][4392] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"750010d3-d177-45c1-adc7-1d676ed1917e", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", 
ContainerID:"10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b", Pod:"coredns-7d764666f9-lqkzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fa8d254c37", MAC:"f2:19:c8:47:07:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:21.669746 containerd[1479]: 2026-03-14 00:10:21.658 [INFO][4392] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b" Namespace="kube-system" Pod="coredns-7d764666f9-lqkzl" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:21.683627 containerd[1479]: time="2026-03-14T00:10:21.682049063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qb2mw,Uid:668745bf-4f34-407b-807f-7c7e77d971d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56\"" Mar 14 00:10:21.701592 containerd[1479]: time="2026-03-14T00:10:21.701343951Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:21.701592 containerd[1479]: time="2026-03-14T00:10:21.701485268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:21.701992 containerd[1479]: time="2026-03-14T00:10:21.701881381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:21.703072 containerd[1479]: time="2026-03-14T00:10:21.702430771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:21.722848 systemd[1]: Started cri-containerd-10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b.scope - libcontainer container 10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b. Mar 14 00:10:21.756196 containerd[1479]: time="2026-03-14T00:10:21.756087590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-lqkzl,Uid:750010d3-d177-45c1-adc7-1d676ed1917e,Namespace:kube-system,Attempt:1,} returns sandbox id \"10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b\"" Mar 14 00:10:21.764643 containerd[1479]: time="2026-03-14T00:10:21.764328639Z" level=info msg="CreateContainer within sandbox \"10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:10:21.787803 containerd[1479]: time="2026-03-14T00:10:21.787660773Z" level=info msg="CreateContainer within sandbox \"10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5bb08930ead733bb9b5892d4538125668f0a7056543b879f355e68d6506475a0\"" Mar 14 00:10:21.790618 containerd[1479]: time="2026-03-14T00:10:21.789875212Z" level=info msg="StartContainer for 
\"5bb08930ead733bb9b5892d4538125668f0a7056543b879f355e68d6506475a0\"" Mar 14 00:10:21.821855 systemd[1]: Started cri-containerd-5bb08930ead733bb9b5892d4538125668f0a7056543b879f355e68d6506475a0.scope - libcontainer container 5bb08930ead733bb9b5892d4538125668f0a7056543b879f355e68d6506475a0. Mar 14 00:10:21.854230 containerd[1479]: time="2026-03-14T00:10:21.854188076Z" level=info msg="StartContainer for \"5bb08930ead733bb9b5892d4538125668f0a7056543b879f355e68d6506475a0\" returns successfully" Mar 14 00:10:22.047142 containerd[1479]: time="2026-03-14T00:10:22.047002871Z" level=info msg="StopPodSandbox for \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\"" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.111 [INFO][4647] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.111 [INFO][4647] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" iface="eth0" netns="/var/run/netns/cni-93348cdf-2ef0-1654-83ca-62f6d688d87d" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.111 [INFO][4647] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" iface="eth0" netns="/var/run/netns/cni-93348cdf-2ef0-1654-83ca-62f6d688d87d" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.112 [INFO][4647] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" iface="eth0" netns="/var/run/netns/cni-93348cdf-2ef0-1654-83ca-62f6d688d87d" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.112 [INFO][4647] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.112 [INFO][4647] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.142 [INFO][4655] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.142 [INFO][4655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.142 [INFO][4655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.153 [WARNING][4655] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.153 [INFO][4655] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.156 [INFO][4655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:22.163397 containerd[1479]: 2026-03-14 00:10:22.160 [INFO][4647] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:22.164735 containerd[1479]: time="2026-03-14T00:10:22.164613286Z" level=info msg="TearDown network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\" successfully" Mar 14 00:10:22.164735 containerd[1479]: time="2026-03-14T00:10:22.164658486Z" level=info msg="StopPodSandbox for \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\" returns successfully" Mar 14 00:10:22.167884 containerd[1479]: time="2026-03-14T00:10:22.167849758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-m9tx7,Uid:67c2f56e-bd26-464f-9efc-470b8df89e0c,Namespace:kube-system,Attempt:1,}" Mar 14 00:10:22.205850 systemd[1]: run-netns-cni\x2d93348cdf\x2d2ef0\x2d1654\x2d83ca\x2d62f6d688d87d.mount: Deactivated successfully. 
Mar 14 00:10:22.320520 systemd-networkd[1376]: cali51aa6c0f505: Link UP Mar 14 00:10:22.322703 systemd-networkd[1376]: cali51aa6c0f505: Gained carrier Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.228 [INFO][4673] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0 coredns-7d764666f9- kube-system 67c2f56e-bd26-464f-9efc-470b8df89e0c 978 0 2026-03-14 00:09:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d coredns-7d764666f9-m9tx7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51aa6c0f505 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.228 [INFO][4673] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.257 [INFO][4684] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" HandleID="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.269 [INFO][4684] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" HandleID="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"coredns-7d764666f9-m9tx7", "timestamp":"2026-03-14 00:10:22.257636226 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400024d080)} Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.269 [INFO][4684] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.270 [INFO][4684] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.270 [INFO][4684] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.274 [INFO][4684] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.281 [INFO][4684] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.286 [INFO][4684] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.289 [INFO][4684] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.292 [INFO][4684] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.292 [INFO][4684] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.295 [INFO][4684] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42 Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.299 [INFO][4684] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.308 [INFO][4684] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.5/26] block=192.168.57.0/26 handle="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.308 [INFO][4684] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.5/26] handle="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.308 [INFO][4684] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:22.341514 containerd[1479]: 2026-03-14 00:10:22.308 [INFO][4684] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.5/26] IPv6=[] ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" HandleID="k8s-pod-network.cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.343418 containerd[1479]: 2026-03-14 00:10:22.310 [INFO][4673] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"67c2f56e-bd26-464f-9efc-470b8df89e0c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"coredns-7d764666f9-m9tx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aa6c0f505", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:22.343418 containerd[1479]: 2026-03-14 00:10:22.311 [INFO][4673] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.5/32] ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.343418 containerd[1479]: 2026-03-14 00:10:22.311 [INFO][4673] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51aa6c0f505 
ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.343418 containerd[1479]: 2026-03-14 00:10:22.320 [INFO][4673] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.343418 containerd[1479]: 2026-03-14 00:10:22.322 [INFO][4673] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"67c2f56e-bd26-464f-9efc-470b8df89e0c", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", 
ContainerID:"cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42", Pod:"coredns-7d764666f9-m9tx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aa6c0f505", MAC:"a2:cf:18:cd:b6:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:22.344245 containerd[1479]: 2026-03-14 00:10:22.337 [INFO][4673] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42" Namespace="kube-system" Pod="coredns-7d764666f9-m9tx7" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:22.376474 containerd[1479]: time="2026-03-14T00:10:22.376353905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:22.377964 containerd[1479]: time="2026-03-14T00:10:22.377882602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:22.378272 containerd[1479]: time="2026-03-14T00:10:22.377947801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:22.378500 containerd[1479]: time="2026-03-14T00:10:22.378451514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:22.424790 systemd[1]: Started cri-containerd-cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42.scope - libcontainer container cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42. Mar 14 00:10:22.448471 kubelet[2609]: I0314 00:10:22.448408 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-lqkzl" podStartSLOduration=46.448393596 podStartE2EDuration="46.448393596s" podCreationTimestamp="2026-03-14 00:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:10:22.426707278 +0000 UTC m=+52.496577129" watchObservedRunningTime="2026-03-14 00:10:22.448393596 +0000 UTC m=+52.518263447" Mar 14 00:10:22.492194 containerd[1479]: time="2026-03-14T00:10:22.491935510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-m9tx7,Uid:67c2f56e-bd26-464f-9efc-470b8df89e0c,Namespace:kube-system,Attempt:1,} returns sandbox id \"cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42\"" Mar 14 00:10:22.499029 containerd[1479]: time="2026-03-14T00:10:22.498866807Z" level=info msg="CreateContainer within sandbox \"cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:10:22.504833 systemd-networkd[1376]: calif72c2efc10e: Gained IPv6LL Mar 14 00:10:22.523297 containerd[1479]: time="2026-03-14T00:10:22.523155647Z" 
level=info msg="CreateContainer within sandbox \"cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"13c4870b99c34e450e24a0888ec5ae30c60ffe32f35f5f1ba67b48c2741e09e9\"" Mar 14 00:10:22.524058 containerd[1479]: time="2026-03-14T00:10:22.524033914Z" level=info msg="StartContainer for \"13c4870b99c34e450e24a0888ec5ae30c60ffe32f35f5f1ba67b48c2741e09e9\"" Mar 14 00:10:22.557804 systemd[1]: Started cri-containerd-13c4870b99c34e450e24a0888ec5ae30c60ffe32f35f5f1ba67b48c2741e09e9.scope - libcontainer container 13c4870b99c34e450e24a0888ec5ae30c60ffe32f35f5f1ba67b48c2741e09e9. Mar 14 00:10:22.587409 containerd[1479]: time="2026-03-14T00:10:22.587045539Z" level=info msg="StartContainer for \"13c4870b99c34e450e24a0888ec5ae30c60ffe32f35f5f1ba67b48c2741e09e9\" returns successfully" Mar 14 00:10:23.046560 containerd[1479]: time="2026-03-14T00:10:23.046449476Z" level=info msg="StopPodSandbox for \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\"" Mar 14 00:10:23.048412 containerd[1479]: time="2026-03-14T00:10:23.047646822Z" level=info msg="StopPodSandbox for \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\"" Mar 14 00:10:23.085772 systemd-networkd[1376]: cali8354e5cc86b: Gained IPv6LL Mar 14 00:10:23.208660 systemd-networkd[1376]: cali1fa8d254c37: Gained IPv6LL Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.128 [INFO][4816] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.128 [INFO][4816] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" iface="eth0" netns="/var/run/netns/cni-bb895283-3f9b-c790-52ef-544e23a85ce1" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.128 [INFO][4816] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" iface="eth0" netns="/var/run/netns/cni-bb895283-3f9b-c790-52ef-544e23a85ce1" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.129 [INFO][4816] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" iface="eth0" netns="/var/run/netns/cni-bb895283-3f9b-c790-52ef-544e23a85ce1" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.129 [INFO][4816] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.129 [INFO][4816] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.178 [INFO][4831] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.178 [INFO][4831] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.178 [INFO][4831] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.201 [WARNING][4831] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.202 [INFO][4831] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.206 [INFO][4831] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:23.215976 containerd[1479]: 2026-03-14 00:10:23.208 [INFO][4816] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:23.218830 containerd[1479]: time="2026-03-14T00:10:23.218687855Z" level=info msg="TearDown network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\" successfully" Mar 14 00:10:23.218830 containerd[1479]: time="2026-03-14T00:10:23.218732015Z" level=info msg="StopPodSandbox for \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\" returns successfully" Mar 14 00:10:23.219315 systemd[1]: run-netns-cni\x2dbb895283\x2d3f9b\x2dc790\x2d52ef\x2d544e23a85ce1.mount: Deactivated successfully. 
Mar 14 00:10:23.225400 containerd[1479]: time="2026-03-14T00:10:23.225205940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-q78r4,Uid:01b2b6b9-d251-4025-a4b5-f21cd42a4542,Namespace:calico-system,Attempt:1,}" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.157 [INFO][4820] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.157 [INFO][4820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" iface="eth0" netns="/var/run/netns/cni-31d23f6a-77e5-f9c2-6f17-590d32bc7047" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.157 [INFO][4820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" iface="eth0" netns="/var/run/netns/cni-31d23f6a-77e5-f9c2-6f17-590d32bc7047" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.157 [INFO][4820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" iface="eth0" netns="/var/run/netns/cni-31d23f6a-77e5-f9c2-6f17-590d32bc7047" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.157 [INFO][4820] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.157 [INFO][4820] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.207 [INFO][4840] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.207 [INFO][4840] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.207 [INFO][4840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.227 [WARNING][4840] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.227 [INFO][4840] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.229 [INFO][4840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:23.234255 containerd[1479]: 2026-03-14 00:10:23.232 [INFO][4820] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:23.237025 systemd[1]: run-netns-cni\x2d31d23f6a\x2d77e5\x2df9c2\x2d6f17\x2d590d32bc7047.mount: Deactivated successfully. 
Mar 14 00:10:23.237353 containerd[1479]: time="2026-03-14T00:10:23.237299521Z" level=info msg="TearDown network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\" successfully" Mar 14 00:10:23.237353 containerd[1479]: time="2026-03-14T00:10:23.237346281Z" level=info msg="StopPodSandbox for \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\" returns successfully" Mar 14 00:10:23.240390 containerd[1479]: time="2026-03-14T00:10:23.240349926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fff8f6d65-6s5j2,Uid:2e989e9d-2731-40ad-ae30-02ef1e6a05b7,Namespace:calico-system,Attempt:1,}" Mar 14 00:10:23.403217 systemd-networkd[1376]: cali51aa6c0f505: Gained IPv6LL Mar 14 00:10:23.406348 systemd-networkd[1376]: calibd6935e7393: Link UP Mar 14 00:10:23.406573 systemd-networkd[1376]: calibd6935e7393: Gained carrier Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.296 [INFO][4851] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0 calico-apiserver-677d8bcf85- calico-system 01b2b6b9-d251-4025-a4b5-f21cd42a4542 996 0 2026-03-14 00:09:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677d8bcf85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d calico-apiserver-677d8bcf85-q78r4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibd6935e7393 [] [] }} ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 
00:10:23.297 [INFO][4851] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.330 [INFO][4873] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" HandleID="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.343 [INFO][4873] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" HandleID="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb3e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"calico-apiserver-677d8bcf85-q78r4", "timestamp":"2026-03-14 00:10:23.330918805 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002daf20)} Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.343 [INFO][4873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.343 [INFO][4873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.343 [INFO][4873] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.348 [INFO][4873] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.355 [INFO][4873] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.363 [INFO][4873] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.367 [INFO][4873] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.372 [INFO][4873] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.375 [INFO][4873] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.378 [INFO][4873] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34 Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.385 [INFO][4873] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.394 [INFO][4873] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.6/26] block=192.168.57.0/26 handle="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.394 [INFO][4873] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.6/26] handle="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.394 [INFO][4873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:23.432910 containerd[1479]: 2026-03-14 00:10:23.394 [INFO][4873] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.6/26] IPv6=[] ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" HandleID="k8s-pod-network.04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.435763 containerd[1479]: 2026-03-14 00:10:23.398 [INFO][4851] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"01b2b6b9-d251-4025-a4b5-f21cd42a4542", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"calico-apiserver-677d8bcf85-q78r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd6935e7393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:23.435763 containerd[1479]: 2026-03-14 00:10:23.398 [INFO][4851] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.6/32] ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.435763 containerd[1479]: 2026-03-14 00:10:23.399 [INFO][4851] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd6935e7393 ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.435763 containerd[1479]: 2026-03-14 00:10:23.408 [INFO][4851] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" 
WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.435763 containerd[1479]: 2026-03-14 00:10:23.408 [INFO][4851] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"01b2b6b9-d251-4025-a4b5-f21cd42a4542", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34", Pod:"calico-apiserver-677d8bcf85-q78r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd6935e7393", MAC:"d6:26:29:6f:47:bb", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:23.435763 containerd[1479]: 2026-03-14 00:10:23.428 [INFO][4851] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-q78r4" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:23.442029 kubelet[2609]: I0314 00:10:23.441057 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-m9tx7" podStartSLOduration=47.441040178 podStartE2EDuration="47.441040178s" podCreationTimestamp="2026-03-14 00:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:10:23.438581247 +0000 UTC m=+53.508451098" watchObservedRunningTime="2026-03-14 00:10:23.441040178 +0000 UTC m=+53.510910029" Mar 14 00:10:23.477594 containerd[1479]: time="2026-03-14T00:10:23.477448160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:23.477594 containerd[1479]: time="2026-03-14T00:10:23.477518239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:23.478236 containerd[1479]: time="2026-03-14T00:10:23.477543479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:23.480332 containerd[1479]: time="2026-03-14T00:10:23.480180848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:23.510745 systemd[1]: Started cri-containerd-04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34.scope - libcontainer container 04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34. Mar 14 00:10:23.538107 systemd-networkd[1376]: cali6f856a0229a: Link UP Mar 14 00:10:23.540468 systemd-networkd[1376]: cali6f856a0229a: Gained carrier Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.316 [INFO][4866] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0 calico-kube-controllers-fff8f6d65- calico-system 2e989e9d-2731-40ad-ae30-02ef1e6a05b7 997 0 2026-03-14 00:09:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fff8f6d65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d calico-kube-controllers-fff8f6d65-6s5j2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f856a0229a [] [] }} ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.316 [INFO][4866] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.362 [INFO][4881] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" HandleID="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.374 [INFO][4881] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" HandleID="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273420), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"calico-kube-controllers-fff8f6d65-6s5j2", "timestamp":"2026-03-14 00:10:23.362009487 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001f7080)} Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.374 [INFO][4881] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.394 [INFO][4881] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.395 [INFO][4881] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.451 [INFO][4881] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.460 [INFO][4881] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.475 [INFO][4881] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.487 [INFO][4881] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.492 [INFO][4881] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.492 [INFO][4881] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.495 [INFO][4881] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4 Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.515 [INFO][4881] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.525 [INFO][4881] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.7/26] block=192.168.57.0/26 handle="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.525 [INFO][4881] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.7/26] handle="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.525 [INFO][4881] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:23.576722 containerd[1479]: 2026-03-14 00:10:23.525 [INFO][4881] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.7/26] IPv6=[] ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" HandleID="k8s-pod-network.34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.577901 containerd[1479]: 2026-03-14 00:10:23.531 [INFO][4866] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0", GenerateName:"calico-kube-controllers-fff8f6d65-", Namespace:"calico-system", SelfLink:"", UID:"2e989e9d-2731-40ad-ae30-02ef1e6a05b7", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"fff8f6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"calico-kube-controllers-fff8f6d65-6s5j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f856a0229a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:23.577901 containerd[1479]: 2026-03-14 00:10:23.531 [INFO][4866] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.7/32] ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.577901 containerd[1479]: 2026-03-14 00:10:23.531 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f856a0229a ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.577901 containerd[1479]: 2026-03-14 00:10:23.539 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" 
Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.577901 containerd[1479]: 2026-03-14 00:10:23.542 [INFO][4866] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0", GenerateName:"calico-kube-controllers-fff8f6d65-", Namespace:"calico-system", SelfLink:"", UID:"2e989e9d-2731-40ad-ae30-02ef1e6a05b7", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fff8f6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4", Pod:"calico-kube-controllers-fff8f6d65-6s5j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f856a0229a", MAC:"3a:a0:1c:9a:16:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:23.577901 containerd[1479]: 2026-03-14 00:10:23.568 [INFO][4866] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4" Namespace="calico-system" Pod="calico-kube-controllers-fff8f6d65-6s5j2" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:23.582347 containerd[1479]: time="2026-03-14T00:10:23.582289634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-q78r4,Uid:01b2b6b9-d251-4025-a4b5-f21cd42a4542,Namespace:calico-system,Attempt:1,} returns sandbox id \"04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34\"" Mar 14 00:10:23.605865 containerd[1479]: time="2026-03-14T00:10:23.605595686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:23.605865 containerd[1479]: time="2026-03-14T00:10:23.605684765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:23.605865 containerd[1479]: time="2026-03-14T00:10:23.605701965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:23.605865 containerd[1479]: time="2026-03-14T00:10:23.605787324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:23.626815 systemd[1]: Started cri-containerd-34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4.scope - libcontainer container 34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4. Mar 14 00:10:23.683649 containerd[1479]: time="2026-03-14T00:10:23.683567350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fff8f6d65-6s5j2,Uid:2e989e9d-2731-40ad-ae30-02ef1e6a05b7,Namespace:calico-system,Attempt:1,} returns sandbox id \"34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4\"" Mar 14 00:10:24.048193 containerd[1479]: time="2026-03-14T00:10:24.047889673Z" level=info msg="StopPodSandbox for \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\"" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.139 [INFO][5017] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.140 [INFO][5017] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" iface="eth0" netns="/var/run/netns/cni-756144e0-7c01-d1f3-f403-58ed3637b28b" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.140 [INFO][5017] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" iface="eth0" netns="/var/run/netns/cni-756144e0-7c01-d1f3-f403-58ed3637b28b" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.142 [INFO][5017] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" iface="eth0" netns="/var/run/netns/cni-756144e0-7c01-d1f3-f403-58ed3637b28b" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.143 [INFO][5017] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.143 [INFO][5017] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.174 [INFO][5026] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.174 [INFO][5026] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.174 [INFO][5026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.188 [WARNING][5026] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.188 [INFO][5026] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.191 [INFO][5026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:24.207221 containerd[1479]: 2026-03-14 00:10:24.201 [INFO][5017] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:24.208718 containerd[1479]: time="2026-03-14T00:10:24.208185228Z" level=info msg="TearDown network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\" successfully" Mar 14 00:10:24.208718 containerd[1479]: time="2026-03-14T00:10:24.208219028Z" level=info msg="StopPodSandbox for \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\" returns successfully" Mar 14 00:10:24.212637 containerd[1479]: time="2026-03-14T00:10:24.212437233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-kwfpb,Uid:b57fb4ff-b196-4410-bd4a-7db8a81fb987,Namespace:calico-system,Attempt:1,}" Mar 14 00:10:24.218928 systemd[1]: run-netns-cni\x2d756144e0\x2d7c01\x2dd1f3\x2df403\x2d58ed3637b28b.mount: Deactivated successfully. 
Mar 14 00:10:24.441935 systemd-networkd[1376]: calia22287bef97: Link UP Mar 14 00:10:24.444175 systemd-networkd[1376]: calia22287bef97: Gained carrier Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.313 [INFO][5032] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0 calico-apiserver-677d8bcf85- calico-system b57fb4ff-b196-4410-bd4a-7db8a81fb987 1017 0 2026-03-14 00:09:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:677d8bcf85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-0ed13f424d calico-apiserver-677d8bcf85-kwfpb eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia22287bef97 [] [] }} ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.314 [INFO][5032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.356 [INFO][5045] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" HandleID="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.470134 
containerd[1479]: 2026-03-14 00:10:24.371 [INFO][5045] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" HandleID="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ed4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-0ed13f424d", "pod":"calico-apiserver-677d8bcf85-kwfpb", "timestamp":"2026-03-14 00:10:24.356309324 +0000 UTC"}, Hostname:"ci-4081-3-6-n-0ed13f424d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000410dc0)} Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.371 [INFO][5045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.371 [INFO][5045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.371 [INFO][5045] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-0ed13f424d' Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.377 [INFO][5045] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.389 [INFO][5045] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.397 [INFO][5045] ipam/ipam.go 526: Trying affinity for 192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.400 [INFO][5045] ipam/ipam.go 160: Attempting to load block cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.405 [INFO][5045] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.57.0/26 host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.405 [INFO][5045] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.57.0/26 handle="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.409 [INFO][5045] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861 Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.418 [INFO][5045] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.57.0/26 handle="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.433 [INFO][5045] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.57.8/26] block=192.168.57.0/26 handle="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.433 [INFO][5045] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.57.8/26] handle="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" host="ci-4081-3-6-n-0ed13f424d" Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.433 [INFO][5045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:24.470134 containerd[1479]: 2026-03-14 00:10:24.433 [INFO][5045] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.57.8/26] IPv6=[] ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" HandleID="k8s-pod-network.1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.471003 containerd[1479]: 2026-03-14 00:10:24.437 [INFO][5032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"b57fb4ff-b196-4410-bd4a-7db8a81fb987", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"", Pod:"calico-apiserver-677d8bcf85-kwfpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia22287bef97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:24.471003 containerd[1479]: 2026-03-14 00:10:24.437 [INFO][5032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.57.8/32] ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.471003 containerd[1479]: 2026-03-14 00:10:24.437 [INFO][5032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia22287bef97 ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.471003 containerd[1479]: 2026-03-14 00:10:24.442 [INFO][5032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" 
WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.471003 containerd[1479]: 2026-03-14 00:10:24.445 [INFO][5032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"b57fb4ff-b196-4410-bd4a-7db8a81fb987", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861", Pod:"calico-apiserver-677d8bcf85-kwfpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia22287bef97", MAC:"e6:dd:c8:0a:cc:1e", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:24.471003 containerd[1479]: 2026-03-14 00:10:24.465 [INFO][5032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861" Namespace="calico-system" Pod="calico-apiserver-677d8bcf85-kwfpb" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:24.476475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051058008.mount: Deactivated successfully. Mar 14 00:10:24.529598 containerd[1479]: time="2026-03-14T00:10:24.527581028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:10:24.529598 containerd[1479]: time="2026-03-14T00:10:24.527672387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:10:24.529598 containerd[1479]: time="2026-03-14T00:10:24.527684467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:24.529598 containerd[1479]: time="2026-03-14T00:10:24.527774587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:10:24.569785 systemd[1]: Started cri-containerd-1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861.scope - libcontainer container 1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861. 
Mar 14 00:10:24.644769 containerd[1479]: time="2026-03-14T00:10:24.644711180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-677d8bcf85-kwfpb,Uid:b57fb4ff-b196-4410-bd4a-7db8a81fb987,Namespace:calico-system,Attempt:1,} returns sandbox id \"1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861\"" Mar 14 00:10:24.808964 systemd-networkd[1376]: cali6f856a0229a: Gained IPv6LL Mar 14 00:10:24.873293 systemd-networkd[1376]: calibd6935e7393: Gained IPv6LL Mar 14 00:10:24.973114 containerd[1479]: time="2026-03-14T00:10:24.972748789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:24.975565 containerd[1479]: time="2026-03-14T00:10:24.974390295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 14 00:10:24.975565 containerd[1479]: time="2026-03-14T00:10:24.974696852Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:24.977324 containerd[1479]: time="2026-03-14T00:10:24.977257991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:24.978380 containerd[1479]: time="2026-03-14T00:10:24.978207503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.402363899s" Mar 14 00:10:24.978380 containerd[1479]: time="2026-03-14T00:10:24.978249223Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 14 00:10:24.980530 containerd[1479]: time="2026-03-14T00:10:24.980313366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:10:24.985070 containerd[1479]: time="2026-03-14T00:10:24.985019527Z" level=info msg="CreateContainer within sandbox \"61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:10:25.003010 containerd[1479]: time="2026-03-14T00:10:25.002953466Z" level=info msg="CreateContainer within sandbox \"61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"20837f64e9c26b346d8a507d4de747f4b7f12404865befba1966d6865fa2b294\"" Mar 14 00:10:25.004991 containerd[1479]: time="2026-03-14T00:10:25.004747817Z" level=info msg="StartContainer for \"20837f64e9c26b346d8a507d4de747f4b7f12404865befba1966d6865fa2b294\"" Mar 14 00:10:25.039929 systemd[1]: Started cri-containerd-20837f64e9c26b346d8a507d4de747f4b7f12404865befba1966d6865fa2b294.scope - libcontainer container 20837f64e9c26b346d8a507d4de747f4b7f12404865befba1966d6865fa2b294. 
Mar 14 00:10:25.082000 containerd[1479]: time="2026-03-14T00:10:25.081708182Z" level=info msg="StartContainer for \"20837f64e9c26b346d8a507d4de747f4b7f12404865befba1966d6865fa2b294\" returns successfully" Mar 14 00:10:26.410278 systemd-networkd[1376]: calia22287bef97: Gained IPv6LL Mar 14 00:10:27.719233 containerd[1479]: time="2026-03-14T00:10:27.719183970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:27.720779 containerd[1479]: time="2026-03-14T00:10:27.720597972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 14 00:10:27.720779 containerd[1479]: time="2026-03-14T00:10:27.720736732Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:27.723561 containerd[1479]: time="2026-03-14T00:10:27.723432294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:27.725578 containerd[1479]: time="2026-03-14T00:10:27.724531415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.744177929s" Mar 14 00:10:27.725578 containerd[1479]: time="2026-03-14T00:10:27.724599535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 14 00:10:27.728613 containerd[1479]: time="2026-03-14T00:10:27.728135338Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:10:27.732906 containerd[1479]: time="2026-03-14T00:10:27.732864542Z" level=info msg="CreateContainer within sandbox \"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:10:27.754244 containerd[1479]: time="2026-03-14T00:10:27.754037520Z" level=info msg="CreateContainer within sandbox \"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2582e1a30d16a091a82f4add675af53632cd110f8d6ce3830b0554e5e472e5ef\"" Mar 14 00:10:27.755809 containerd[1479]: time="2026-03-14T00:10:27.755737441Z" level=info msg="StartContainer for \"2582e1a30d16a091a82f4add675af53632cd110f8d6ce3830b0554e5e472e5ef\"" Mar 14 00:10:27.801778 systemd[1]: Started cri-containerd-2582e1a30d16a091a82f4add675af53632cd110f8d6ce3830b0554e5e472e5ef.scope - libcontainer container 2582e1a30d16a091a82f4add675af53632cd110f8d6ce3830b0554e5e472e5ef. Mar 14 00:10:27.838779 containerd[1479]: time="2026-03-14T00:10:27.838714391Z" level=info msg="StartContainer for \"2582e1a30d16a091a82f4add675af53632cd110f8d6ce3830b0554e5e472e5ef\" returns successfully" Mar 14 00:10:28.466466 systemd[1]: run-containerd-runc-k8s.io-2582e1a30d16a091a82f4add675af53632cd110f8d6ce3830b0554e5e472e5ef-runc.Gp0b0k.mount: Deactivated successfully. Mar 14 00:10:30.068384 containerd[1479]: time="2026-03-14T00:10:30.068239199Z" level=info msg="StopPodSandbox for \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\"" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.133 [WARNING][5287] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"01b2b6b9-d251-4025-a4b5-f21cd42a4542", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34", Pod:"calico-apiserver-677d8bcf85-q78r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd6935e7393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.134 [INFO][5287] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.134 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" iface="eth0" netns="" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.134 [INFO][5287] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.134 [INFO][5287] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.183 [INFO][5294] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.183 [INFO][5294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.183 [INFO][5294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.195 [WARNING][5294] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.195 [INFO][5294] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.199 [INFO][5294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:30.204902 containerd[1479]: 2026-03-14 00:10:30.202 [INFO][5287] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.204902 containerd[1479]: time="2026-03-14T00:10:30.204772444Z" level=info msg="TearDown network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\" successfully" Mar 14 00:10:30.204902 containerd[1479]: time="2026-03-14T00:10:30.204798284Z" level=info msg="StopPodSandbox for \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\" returns successfully" Mar 14 00:10:30.206598 containerd[1479]: time="2026-03-14T00:10:30.206108656Z" level=info msg="RemovePodSandbox for \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\"" Mar 14 00:10:30.206598 containerd[1479]: time="2026-03-14T00:10:30.206148976Z" level=info msg="Forcibly stopping sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\"" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.256 [WARNING][5308] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"01b2b6b9-d251-4025-a4b5-f21cd42a4542", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34", Pod:"calico-apiserver-677d8bcf85-q78r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibd6935e7393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.256 [INFO][5308] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.256 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" iface="eth0" netns="" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.256 [INFO][5308] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.256 [INFO][5308] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.295 [INFO][5315] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.296 [INFO][5315] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.296 [INFO][5315] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.311 [WARNING][5315] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.311 [INFO][5315] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" HandleID="k8s-pod-network.cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--q78r4-eth0" Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.314 [INFO][5315] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:30.320084 containerd[1479]: 2026-03-14 00:10:30.316 [INFO][5308] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd" Mar 14 00:10:30.321659 containerd[1479]: time="2026-03-14T00:10:30.320760222Z" level=info msg="TearDown network for sandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\" successfully" Mar 14 00:10:30.327306 containerd[1479]: time="2026-03-14T00:10:30.327263441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:30.327549 containerd[1479]: time="2026-03-14T00:10:30.327509283Z" level=info msg="RemovePodSandbox \"cf9c3832e5c2070adab15aef6e3e72c442d9661045d4d29ac36ae6b78e9d6ffd\" returns successfully" Mar 14 00:10:30.328309 containerd[1479]: time="2026-03-14T00:10:30.328272690Z" level=info msg="StopPodSandbox for \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\"" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.385 [WARNING][5332] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"668745bf-4f34-407b-807f-7c7e77d971d9", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56", Pod:"csi-node-driver-qb2mw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8354e5cc86b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.385 [INFO][5332] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.385 [INFO][5332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" iface="eth0" netns="" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.385 [INFO][5332] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.385 [INFO][5332] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.409 [INFO][5339] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.409 [INFO][5339] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.409 [INFO][5339] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.423 [WARNING][5339] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.423 [INFO][5339] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.425 [INFO][5339] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:30.429426 containerd[1479]: 2026-03-14 00:10:30.427 [INFO][5332] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.429426 containerd[1479]: time="2026-03-14T00:10:30.429399492Z" level=info msg="TearDown network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\" successfully" Mar 14 00:10:30.429426 containerd[1479]: time="2026-03-14T00:10:30.429425932Z" level=info msg="StopPodSandbox for \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\" returns successfully" Mar 14 00:10:30.431765 containerd[1479]: time="2026-03-14T00:10:30.430058698Z" level=info msg="RemovePodSandbox for \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\"" Mar 14 00:10:30.431765 containerd[1479]: time="2026-03-14T00:10:30.430087458Z" level=info msg="Forcibly stopping sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\"" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.503 [WARNING][5353] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"668745bf-4f34-407b-807f-7c7e77d971d9", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56", Pod:"csi-node-driver-qb2mw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8354e5cc86b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.503 [INFO][5353] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.503 [INFO][5353] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" iface="eth0" netns="" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.503 [INFO][5353] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.503 [INFO][5353] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.534 [INFO][5360] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.534 [INFO][5360] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.534 [INFO][5360] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.545 [WARNING][5360] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.545 [INFO][5360] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" HandleID="k8s-pod-network.d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Workload="ci--4081--3--6--n--0ed13f424d-k8s-csi--node--driver--qb2mw-eth0" Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.548 [INFO][5360] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:30.553829 containerd[1479]: 2026-03-14 00:10:30.551 [INFO][5353] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f" Mar 14 00:10:30.554967 containerd[1479]: time="2026-03-14T00:10:30.554674395Z" level=info msg="TearDown network for sandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\" successfully" Mar 14 00:10:30.565068 containerd[1479]: time="2026-03-14T00:10:30.564996729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:30.565437 containerd[1479]: time="2026-03-14T00:10:30.565312012Z" level=info msg="RemovePodSandbox \"d79ec75f97c31c4049de5f5dbf9912926a977479338775832d4e998ea4deaa6f\" returns successfully" Mar 14 00:10:30.567133 containerd[1479]: time="2026-03-14T00:10:30.566769425Z" level=info msg="StopPodSandbox for \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\"" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.633 [WARNING][5375] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"b57fb4ff-b196-4410-bd4a-7db8a81fb987", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861", Pod:"calico-apiserver-677d8bcf85-kwfpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia22287bef97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.633 [INFO][5375] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.633 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" iface="eth0" netns="" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.633 [INFO][5375] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.633 [INFO][5375] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.665 [INFO][5383] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.665 [INFO][5383] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.665 [INFO][5383] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.680 [WARNING][5383] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.680 [INFO][5383] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.684 [INFO][5383] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:30.690829 containerd[1479]: 2026-03-14 00:10:30.688 [INFO][5375] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.692571 containerd[1479]: time="2026-03-14T00:10:30.692164888Z" level=info msg="TearDown network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\" successfully" Mar 14 00:10:30.692571 containerd[1479]: time="2026-03-14T00:10:30.692203729Z" level=info msg="StopPodSandbox for \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\" returns successfully" Mar 14 00:10:30.693148 containerd[1479]: time="2026-03-14T00:10:30.692991496Z" level=info msg="RemovePodSandbox for \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\"" Mar 14 00:10:30.693148 containerd[1479]: time="2026-03-14T00:10:30.693087097Z" level=info msg="Forcibly stopping sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\"" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.753 [WARNING][5397] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0", GenerateName:"calico-apiserver-677d8bcf85-", Namespace:"calico-system", SelfLink:"", UID:"b57fb4ff-b196-4410-bd4a-7db8a81fb987", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"677d8bcf85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861", Pod:"calico-apiserver-677d8bcf85-kwfpb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia22287bef97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.754 [INFO][5397] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.754 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" iface="eth0" netns="" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.754 [INFO][5397] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.754 [INFO][5397] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.799 [INFO][5405] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.800 [INFO][5405] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.800 [INFO][5405] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.814 [WARNING][5405] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.814 [INFO][5405] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" HandleID="k8s-pod-network.dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--apiserver--677d8bcf85--kwfpb-eth0" Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.817 [INFO][5405] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:30.826167 containerd[1479]: 2026-03-14 00:10:30.823 [INFO][5397] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938" Mar 14 00:10:30.826872 containerd[1479]: time="2026-03-14T00:10:30.826214431Z" level=info msg="TearDown network for sandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\" successfully" Mar 14 00:10:30.838530 containerd[1479]: time="2026-03-14T00:10:30.838164420Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:30.839259 containerd[1479]: time="2026-03-14T00:10:30.839201629Z" level=info msg="RemovePodSandbox \"dc47ab6846fc1217701e995aade7efb5f1cc4400be32f63afd2f48f3cc3a1938\" returns successfully" Mar 14 00:10:30.840299 containerd[1479]: time="2026-03-14T00:10:30.840261359Z" level=info msg="StopPodSandbox for \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\"" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.910 [WARNING][5420] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"67c2f56e-bd26-464f-9efc-470b8df89e0c", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42", Pod:"coredns-7d764666f9-m9tx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aa6c0f505", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.910 [INFO][5420] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.910 [INFO][5420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" iface="eth0" netns="" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.911 [INFO][5420] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.911 [INFO][5420] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.956 [INFO][5428] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.956 [INFO][5428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.956 [INFO][5428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.970 [WARNING][5428] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.970 [INFO][5428] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.972 [INFO][5428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:30.980781 containerd[1479]: 2026-03-14 00:10:30.976 [INFO][5420] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:30.981386 containerd[1479]: time="2026-03-14T00:10:30.980779960Z" level=info msg="TearDown network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\" successfully" Mar 14 00:10:30.981386 containerd[1479]: time="2026-03-14T00:10:30.980807761Z" level=info msg="StopPodSandbox for \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\" returns successfully" Mar 14 00:10:30.981386 containerd[1479]: time="2026-03-14T00:10:30.981280725Z" level=info msg="RemovePodSandbox for \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\"" Mar 14 00:10:30.981386 containerd[1479]: time="2026-03-14T00:10:30.981309605Z" level=info msg="Forcibly stopping sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\"" Mar 14 00:10:31.073554 containerd[1479]: time="2026-03-14T00:10:31.072234820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Mar 14 00:10:31.073554 containerd[1479]: time="2026-03-14T00:10:31.073058349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 14 00:10:31.073972 containerd[1479]: time="2026-03-14T00:10:31.073744718Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:31.076699 containerd[1479]: time="2026-03-14T00:10:31.076601431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:31.077741 containerd[1479]: time="2026-03-14T00:10:31.077703244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.349530826s" Mar 14 00:10:31.077807 containerd[1479]: time="2026-03-14T00:10:31.077741964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:10:31.080920 containerd[1479]: time="2026-03-14T00:10:31.080614878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:10:31.083699 containerd[1479]: time="2026-03-14T00:10:31.083613193Z" level=info msg="CreateContainer within sandbox \"04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:10:31.099679 containerd[1479]: time="2026-03-14T00:10:31.099633261Z" level=info msg="CreateContainer within sandbox 
\"04bea4ec1d803606f2d80dc02c99e78d5cb7f17154f015e2eb0e80ad82b8ec34\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e57b46f3f54647c779bf916b03ef9005d6bd3fb8238db94d376a613c14ecb309\"" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.039 [WARNING][5442] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"67c2f56e-bd26-464f-9efc-470b8df89e0c", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"cf6dfd2aa131286a7c75b4382d7ebe12fa999f65cca9dcee3f9c88173f9c5d42", Pod:"coredns-7d764666f9-m9tx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51aa6c0f505", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.040 [INFO][5442] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.040 [INFO][5442] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" iface="eth0" netns="" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.040 [INFO][5442] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.040 [INFO][5442] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.069 [INFO][5450] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.069 [INFO][5450] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.069 [INFO][5450] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.085 [WARNING][5450] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.085 [INFO][5450] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" HandleID="k8s-pod-network.e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--m9tx7-eth0" Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.088 [INFO][5450] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.101579 containerd[1479]: 2026-03-14 00:10:31.090 [INFO][5442] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d" Mar 14 00:10:31.101579 containerd[1479]: time="2026-03-14T00:10:31.099831743Z" level=info msg="TearDown network for sandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\" successfully" Mar 14 00:10:31.105764 containerd[1479]: time="2026-03-14T00:10:31.105711052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:31.105976 containerd[1479]: time="2026-03-14T00:10:31.105958175Z" level=info msg="RemovePodSandbox \"e18a4075f23fbb0f49938b5d0eeda386b5f729186cc55115ca9776026c0c497d\" returns successfully" Mar 14 00:10:31.106431 containerd[1479]: time="2026-03-14T00:10:31.106242018Z" level=info msg="StartContainer for \"e57b46f3f54647c779bf916b03ef9005d6bd3fb8238db94d376a613c14ecb309\"" Mar 14 00:10:31.107403 containerd[1479]: time="2026-03-14T00:10:31.107243230Z" level=info msg="StopPodSandbox for \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\"" Mar 14 00:10:31.203994 systemd[1]: run-containerd-runc-k8s.io-e57b46f3f54647c779bf916b03ef9005d6bd3fb8238db94d376a613c14ecb309-runc.9aTBbE.mount: Deactivated successfully. Mar 14 00:10:31.216747 systemd[1]: Started cri-containerd-e57b46f3f54647c779bf916b03ef9005d6bd3fb8238db94d376a613c14ecb309.scope - libcontainer container e57b46f3f54647c779bf916b03ef9005d6bd3fb8238db94d376a613c14ecb309. Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.168 [WARNING][5470] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0", GenerateName:"calico-kube-controllers-fff8f6d65-", Namespace:"calico-system", SelfLink:"", UID:"2e989e9d-2731-40ad-ae30-02ef1e6a05b7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fff8f6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4", Pod:"calico-kube-controllers-fff8f6d65-6s5j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f856a0229a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.168 [INFO][5470] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.168 [INFO][5470] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" iface="eth0" netns="" Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.168 [INFO][5470] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.168 [INFO][5470] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.227 [INFO][5480] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.227 [INFO][5480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.227 [INFO][5480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.239 [WARNING][5480] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.239 [INFO][5480] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.241 [INFO][5480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.246809 containerd[1479]: 2026-03-14 00:10:31.244 [INFO][5470] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.246809 containerd[1479]: time="2026-03-14T00:10:31.246111616Z" level=info msg="TearDown network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\" successfully" Mar 14 00:10:31.246809 containerd[1479]: time="2026-03-14T00:10:31.246138456Z" level=info msg="StopPodSandbox for \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\" returns successfully" Mar 14 00:10:31.248485 containerd[1479]: time="2026-03-14T00:10:31.247750435Z" level=info msg="RemovePodSandbox for \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\"" Mar 14 00:10:31.248485 containerd[1479]: time="2026-03-14T00:10:31.247781795Z" level=info msg="Forcibly stopping sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\"" Mar 14 00:10:31.279585 containerd[1479]: time="2026-03-14T00:10:31.277360741Z" level=info msg="StartContainer for \"e57b46f3f54647c779bf916b03ef9005d6bd3fb8238db94d376a613c14ecb309\" returns 
successfully" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.301 [WARNING][5514] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0", GenerateName:"calico-kube-controllers-fff8f6d65-", Namespace:"calico-system", SelfLink:"", UID:"2e989e9d-2731-40ad-ae30-02ef1e6a05b7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fff8f6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4", Pod:"calico-kube-controllers-fff8f6d65-6s5j2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f856a0229a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.301 [INFO][5514] cni-plugin/k8s.go 652: 
Cleaning up netns ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.301 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" iface="eth0" netns="" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.301 [INFO][5514] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.301 [INFO][5514] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.329 [INFO][5532] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.329 [INFO][5532] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.329 [INFO][5532] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.346 [WARNING][5532] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.346 [INFO][5532] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" HandleID="k8s-pod-network.137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Workload="ci--4081--3--6--n--0ed13f424d-k8s-calico--kube--controllers--fff8f6d65--6s5j2-eth0" Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.348 [INFO][5532] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.353947 containerd[1479]: 2026-03-14 00:10:31.352 [INFO][5514] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e" Mar 14 00:10:31.355016 containerd[1479]: time="2026-03-14T00:10:31.354611526Z" level=info msg="TearDown network for sandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\" successfully" Mar 14 00:10:31.359223 containerd[1479]: time="2026-03-14T00:10:31.359041298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:31.359223 containerd[1479]: time="2026-03-14T00:10:31.359181899Z" level=info msg="RemovePodSandbox \"137c854b7df1eba2ff622c34681a787d29fda313d6f0d5d74ea6e19c84e61e9e\" returns successfully" Mar 14 00:10:31.360570 containerd[1479]: time="2026-03-14T00:10:31.360226512Z" level=info msg="StopPodSandbox for \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\"" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.414 [WARNING][5548] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"750010d3-d177-45c1-adc7-1d676ed1917e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b", Pod:"coredns-7d764666f9-lqkzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fa8d254c37", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.414 [INFO][5548] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.414 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" iface="eth0" netns="" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.414 [INFO][5548] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.414 [INFO][5548] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.441 [INFO][5559] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.441 [INFO][5559] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.441 [INFO][5559] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.450 [WARNING][5559] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.451 [INFO][5559] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.452 [INFO][5559] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.457492 containerd[1479]: 2026-03-14 00:10:31.455 [INFO][5548] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.458275 containerd[1479]: time="2026-03-14T00:10:31.457710293Z" level=info msg="TearDown network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\" successfully" Mar 14 00:10:31.458275 containerd[1479]: time="2026-03-14T00:10:31.457738933Z" level=info msg="StopPodSandbox for \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\" returns successfully" Mar 14 00:10:31.458400 containerd[1479]: time="2026-03-14T00:10:31.458371861Z" level=info msg="RemovePodSandbox for \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\"" Mar 14 00:10:31.458430 containerd[1479]: time="2026-03-14T00:10:31.458408221Z" level=info msg="Forcibly stopping sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\"" Mar 14 00:10:31.507680 kubelet[2609]: I0314 00:10:31.506310 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-677d8bcf85-q78r4" podStartSLOduration=33.015474255 
podStartE2EDuration="40.506295262s" podCreationTimestamp="2026-03-14 00:09:51 +0000 UTC" firstStartedPulling="2026-03-14 00:10:23.587923169 +0000 UTC m=+53.657793020" lastFinishedPulling="2026-03-14 00:10:31.078744176 +0000 UTC m=+61.148614027" observedRunningTime="2026-03-14 00:10:31.506199621 +0000 UTC m=+61.576069432" watchObservedRunningTime="2026-03-14 00:10:31.506295262 +0000 UTC m=+61.576165073" Mar 14 00:10:31.507680 kubelet[2609]: I0314 00:10:31.506470 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-5kqn9" podStartSLOduration=37.100271423 podStartE2EDuration="40.506465584s" podCreationTimestamp="2026-03-14 00:09:51 +0000 UTC" firstStartedPulling="2026-03-14 00:10:21.573362971 +0000 UTC m=+51.643232822" lastFinishedPulling="2026-03-14 00:10:24.979557052 +0000 UTC m=+55.049426983" observedRunningTime="2026-03-14 00:10:25.462815986 +0000 UTC m=+55.532685877" watchObservedRunningTime="2026-03-14 00:10:31.506465584 +0000 UTC m=+61.576335435" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.532 [WARNING][5574] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"750010d3-d177-45c1-adc7-1d676ed1917e", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"10c4a7703a932e0e8661dece386de2a362f4b29653b73ccdef4dbc096602b21b", Pod:"coredns-7d764666f9-lqkzl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1fa8d254c37", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.532 [INFO][5574] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.532 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" iface="eth0" netns="" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.532 [INFO][5574] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.532 [INFO][5574] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.561 [INFO][5582] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.562 [INFO][5582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.562 [INFO][5582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.573 [WARNING][5582] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.573 [INFO][5582] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" HandleID="k8s-pod-network.76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Workload="ci--4081--3--6--n--0ed13f424d-k8s-coredns--7d764666f9--lqkzl-eth0" Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.575 [INFO][5582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.579302 containerd[1479]: 2026-03-14 00:10:31.577 [INFO][5574] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890" Mar 14 00:10:31.579851 containerd[1479]: time="2026-03-14T00:10:31.579348117Z" level=info msg="TearDown network for sandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\" successfully" Mar 14 00:10:31.583892 containerd[1479]: time="2026-03-14T00:10:31.583844410Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:31.584000 containerd[1479]: time="2026-03-14T00:10:31.583929971Z" level=info msg="RemovePodSandbox \"76b1701435cb536c825bd471935f9a74273db4e6a2d0fb20b90fcfed0a38c890\" returns successfully" Mar 14 00:10:31.584519 containerd[1479]: time="2026-03-14T00:10:31.584473977Z" level=info msg="StopPodSandbox for \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\"" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.648 [WARNING][5597] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.649 [INFO][5597] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.649 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" iface="eth0" netns="" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.649 [INFO][5597] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.649 [INFO][5597] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.690 [INFO][5605] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.690 [INFO][5605] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.690 [INFO][5605] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.700 [WARNING][5605] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.701 [INFO][5605] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.703 [INFO][5605] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.708317 containerd[1479]: 2026-03-14 00:10:31.705 [INFO][5597] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.708317 containerd[1479]: time="2026-03-14T00:10:31.708262266Z" level=info msg="TearDown network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\" successfully" Mar 14 00:10:31.708317 containerd[1479]: time="2026-03-14T00:10:31.708288627Z" level=info msg="StopPodSandbox for \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\" returns successfully" Mar 14 00:10:31.709745 containerd[1479]: time="2026-03-14T00:10:31.709235798Z" level=info msg="RemovePodSandbox for \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\"" Mar 14 00:10:31.709745 containerd[1479]: time="2026-03-14T00:10:31.709283478Z" level=info msg="Forcibly stopping sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\"" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.759 [WARNING][5619] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" WorkloadEndpoint="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.760 [INFO][5619] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.760 [INFO][5619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" iface="eth0" netns="" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.760 [INFO][5619] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.760 [INFO][5619] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.780 [INFO][5626] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.780 [INFO][5626] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.781 [INFO][5626] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.793 [WARNING][5626] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.793 [INFO][5626] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" HandleID="k8s-pod-network.74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-whisker--5b75858f74--6sk86-eth0" Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.795 [INFO][5626] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.800908 containerd[1479]: 2026-03-14 00:10:31.796 [INFO][5619] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a" Mar 14 00:10:31.801335 containerd[1479]: time="2026-03-14T00:10:31.800900071Z" level=info msg="TearDown network for sandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\" successfully" Mar 14 00:10:31.806555 containerd[1479]: time="2026-03-14T00:10:31.806466856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:31.806700 containerd[1479]: time="2026-03-14T00:10:31.806598898Z" level=info msg="RemovePodSandbox \"74d8a859e8fb576036cdf0c8623e2ba8203cb3066a773e83a53a7de9f649ac5a\" returns successfully" Mar 14 00:10:31.807440 containerd[1479]: time="2026-03-14T00:10:31.807161744Z" level=info msg="StopPodSandbox for \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\"" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.847 [WARNING][5641] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"34f9a0e8-7ea2-403e-b90c-0c3c742508fa", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9", Pod:"goldmane-9f7667bb8-5kqn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calif72c2efc10e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.847 [INFO][5641] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.847 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" iface="eth0" netns="" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.847 [INFO][5641] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.847 [INFO][5641] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.871 [INFO][5648] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.872 [INFO][5648] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.872 [INFO][5648] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.882 [WARNING][5648] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.882 [INFO][5648] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.884 [INFO][5648] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.889290 containerd[1479]: 2026-03-14 00:10:31.886 [INFO][5641] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.890038 containerd[1479]: time="2026-03-14T00:10:31.889666110Z" level=info msg="TearDown network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\" successfully" Mar 14 00:10:31.890038 containerd[1479]: time="2026-03-14T00:10:31.889695511Z" level=info msg="StopPodSandbox for \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\" returns successfully" Mar 14 00:10:31.890571 containerd[1479]: time="2026-03-14T00:10:31.890249837Z" level=info msg="RemovePodSandbox for \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\"" Mar 14 00:10:31.890571 containerd[1479]: time="2026-03-14T00:10:31.890282637Z" level=info msg="Forcibly stopping sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\"" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.947 [WARNING][5662] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"34f9a0e8-7ea2-403e-b90c-0c3c742508fa", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 9, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-0ed13f424d", ContainerID:"61e9e494081ac2b27a7c794e9604d681f0644a0ec7fe09434003fd54cbe9c3a9", Pod:"goldmane-9f7667bb8-5kqn9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.57.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif72c2efc10e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.948 [INFO][5662] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.948 [INFO][5662] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" iface="eth0" netns="" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.948 [INFO][5662] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.948 [INFO][5662] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.969 [INFO][5669] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.969 [INFO][5669] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.969 [INFO][5669] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.979 [WARNING][5669] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.979 [INFO][5669] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" HandleID="k8s-pod-network.711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Workload="ci--4081--3--6--n--0ed13f424d-k8s-goldmane--9f7667bb8--5kqn9-eth0" Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.981 [INFO][5669] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:10:31.985396 containerd[1479]: 2026-03-14 00:10:31.983 [INFO][5662] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a" Mar 14 00:10:31.987662 containerd[1479]: time="2026-03-14T00:10:31.985433991Z" level=info msg="TearDown network for sandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\" successfully" Mar 14 00:10:31.999067 containerd[1479]: time="2026-03-14T00:10:31.998658106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:10:31.999067 containerd[1479]: time="2026-03-14T00:10:31.998746747Z" level=info msg="RemovePodSandbox \"711b34fa36887c958097783872a6b38508dd430037944ddea00782a60f9b7b9a\" returns successfully" Mar 14 00:10:32.492374 kubelet[2609]: I0314 00:10:32.492347 2609 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:10:32.654269 systemd[1]: run-containerd-runc-k8s.io-20837f64e9c26b346d8a507d4de747f4b7f12404865befba1966d6865fa2b294-runc.qAKXBo.mount: Deactivated successfully. Mar 14 00:10:34.808966 containerd[1479]: time="2026-03-14T00:10:34.808823746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:34.810572 containerd[1479]: time="2026-03-14T00:10:34.810404296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 14 00:10:34.812494 containerd[1479]: time="2026-03-14T00:10:34.812318892Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:34.817343 containerd[1479]: time="2026-03-14T00:10:34.817062063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:34.818091 containerd[1479]: time="2026-03-14T00:10:34.818042881Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 3.737362362s" Mar 14 00:10:34.818091 
containerd[1479]: time="2026-03-14T00:10:34.818085202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 14 00:10:34.820508 containerd[1479]: time="2026-03-14T00:10:34.820480688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:10:34.842244 containerd[1479]: time="2026-03-14T00:10:34.842211620Z" level=info msg="CreateContainer within sandbox \"34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:10:34.857296 containerd[1479]: time="2026-03-14T00:10:34.854254649Z" level=info msg="CreateContainer within sandbox \"34d56e0e815466254a886605552bf96249adcd4639c42d14a2d7e3851c61aad4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"346bf52c3ef0fbb9d73897942ec727ed05a050148d42f9bcf066ea57dd1812bc\"" Mar 14 00:10:34.857296 containerd[1479]: time="2026-03-14T00:10:34.855750238Z" level=info msg="StartContainer for \"346bf52c3ef0fbb9d73897942ec727ed05a050148d42f9bcf066ea57dd1812bc\"" Mar 14 00:10:34.893884 systemd[1]: Started cri-containerd-346bf52c3ef0fbb9d73897942ec727ed05a050148d42f9bcf066ea57dd1812bc.scope - libcontainer container 346bf52c3ef0fbb9d73897942ec727ed05a050148d42f9bcf066ea57dd1812bc. 
Mar 14 00:10:34.930647 containerd[1479]: time="2026-03-14T00:10:34.930501058Z" level=info msg="StartContainer for \"346bf52c3ef0fbb9d73897942ec727ed05a050148d42f9bcf066ea57dd1812bc\" returns successfully" Mar 14 00:10:35.232572 containerd[1479]: time="2026-03-14T00:10:35.232268160Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:35.234569 containerd[1479]: time="2026-03-14T00:10:35.233202979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:10:35.236150 containerd[1479]: time="2026-03-14T00:10:35.236109401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 415.174105ms" Mar 14 00:10:35.236282 containerd[1479]: time="2026-03-14T00:10:35.236262645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:10:35.237705 containerd[1479]: time="2026-03-14T00:10:35.237647474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:10:35.242396 containerd[1479]: time="2026-03-14T00:10:35.242364454Z" level=info msg="CreateContainer within sandbox \"1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:10:35.261441 containerd[1479]: time="2026-03-14T00:10:35.261364099Z" level=info msg="CreateContainer within sandbox \"1f5e467a63fa598486749cffa527dd9a82cd7d8a3a3b5a6dc31dcf46977a2861\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de78b04a3a3171fde1af6a21255d43230ecf8957ac97426eb975d97ffbebde29\"" Mar 14 00:10:35.264038 containerd[1479]: time="2026-03-14T00:10:35.262876771Z" level=info msg="StartContainer for \"de78b04a3a3171fde1af6a21255d43230ecf8957ac97426eb975d97ffbebde29\"" Mar 14 00:10:35.297747 systemd[1]: Started cri-containerd-de78b04a3a3171fde1af6a21255d43230ecf8957ac97426eb975d97ffbebde29.scope - libcontainer container de78b04a3a3171fde1af6a21255d43230ecf8957ac97426eb975d97ffbebde29. Mar 14 00:10:35.347558 containerd[1479]: time="2026-03-14T00:10:35.347094083Z" level=info msg="StartContainer for \"de78b04a3a3171fde1af6a21255d43230ecf8957ac97426eb975d97ffbebde29\" returns successfully" Mar 14 00:10:35.532093 kubelet[2609]: I0314 00:10:35.531845 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fff8f6d65-6s5j2" podStartSLOduration=31.39753425 podStartE2EDuration="42.531827734s" podCreationTimestamp="2026-03-14 00:09:53 +0000 UTC" firstStartedPulling="2026-03-14 00:10:23.686043281 +0000 UTC m=+53.755913132" lastFinishedPulling="2026-03-14 00:10:34.820336725 +0000 UTC m=+64.890206616" observedRunningTime="2026-03-14 00:10:35.530904754 +0000 UTC m=+65.600774605" watchObservedRunningTime="2026-03-14 00:10:35.531827734 +0000 UTC m=+65.601697585" Mar 14 00:10:35.604901 kubelet[2609]: I0314 00:10:35.604818 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-677d8bcf85-kwfpb" podStartSLOduration=34.01521706 podStartE2EDuration="44.604800647s" podCreationTimestamp="2026-03-14 00:09:51 +0000 UTC" firstStartedPulling="2026-03-14 00:10:24.647562956 +0000 UTC m=+54.717432807" lastFinishedPulling="2026-03-14 00:10:35.237146543 +0000 UTC m=+65.307016394" observedRunningTime="2026-03-14 00:10:35.551176066 +0000 UTC m=+65.621045917" watchObservedRunningTime="2026-03-14 00:10:35.604800647 +0000 UTC 
m=+65.674670498" Mar 14 00:10:37.176475 containerd[1479]: time="2026-03-14T00:10:37.176321141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:37.178038 containerd[1479]: time="2026-03-14T00:10:37.177450970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 14 00:10:37.179044 containerd[1479]: time="2026-03-14T00:10:37.178991689Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:37.183427 containerd[1479]: time="2026-03-14T00:10:37.183363841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:10:37.185574 containerd[1479]: time="2026-03-14T00:10:37.184646954Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.946743034s" Mar 14 00:10:37.185574 containerd[1479]: time="2026-03-14T00:10:37.184703915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 14 00:10:37.191621 containerd[1479]: time="2026-03-14T00:10:37.191332685Z" level=info msg="CreateContainer within sandbox \"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:10:37.209903 containerd[1479]: time="2026-03-14T00:10:37.209843760Z" level=info msg="CreateContainer within sandbox \"d0ef0a141aa7a983f73fa5a9da2cce50bcf149e5d3248bfb3f5aa6421fe7de56\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8130f3db7e2cd99b95e8a33129a34d6a59adc37dbac81888159ab98ca5b6e9c7\"" Mar 14 00:10:37.211321 containerd[1479]: time="2026-03-14T00:10:37.210485256Z" level=info msg="StartContainer for \"8130f3db7e2cd99b95e8a33129a34d6a59adc37dbac81888159ab98ca5b6e9c7\"" Mar 14 00:10:37.261843 systemd[1]: Started cri-containerd-8130f3db7e2cd99b95e8a33129a34d6a59adc37dbac81888159ab98ca5b6e9c7.scope - libcontainer container 8130f3db7e2cd99b95e8a33129a34d6a59adc37dbac81888159ab98ca5b6e9c7. Mar 14 00:10:37.308486 containerd[1479]: time="2026-03-14T00:10:37.308286323Z" level=info msg="StartContainer for \"8130f3db7e2cd99b95e8a33129a34d6a59adc37dbac81888159ab98ca5b6e9c7\" returns successfully" Mar 14 00:10:37.566195 kubelet[2609]: I0314 00:10:37.565005 2609 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-qb2mw" podStartSLOduration=29.06584355 podStartE2EDuration="44.564988981s" podCreationTimestamp="2026-03-14 00:09:53 +0000 UTC" firstStartedPulling="2026-03-14 00:10:21.687250088 +0000 UTC m=+51.757119939" lastFinishedPulling="2026-03-14 00:10:37.186395519 +0000 UTC m=+67.256265370" observedRunningTime="2026-03-14 00:10:37.564743535 +0000 UTC m=+67.634613386" watchObservedRunningTime="2026-03-14 00:10:37.564988981 +0000 UTC m=+67.634858832" Mar 14 00:10:38.192482 kubelet[2609]: I0314 00:10:38.192430 2609 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:10:38.195575 kubelet[2609]: I0314 00:10:38.195542 2609 csi_plugin.go:119] kubernetes.io/csi: Register new plugin 
with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:10:46.811942 kubelet[2609]: I0314 00:10:46.811337 2609 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:12:00.773339 systemd[1]: Started sshd@7-46.224.38.228:22-68.220.241.50:39862.service - OpenSSH per-connection server daemon (68.220.241.50:39862). Mar 14 00:12:01.376261 sshd[6189]: Accepted publickey for core from 68.220.241.50 port 39862 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:01.378676 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:01.384691 systemd-logind[1453]: New session 8 of user core. Mar 14 00:12:01.392865 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:12:01.889242 sshd[6189]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:01.893465 systemd[1]: sshd@7-46.224.38.228:22-68.220.241.50:39862.service: Deactivated successfully. Mar 14 00:12:01.896715 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:12:01.900911 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:12:01.902246 systemd-logind[1453]: Removed session 8. Mar 14 00:12:07.004244 systemd[1]: Started sshd@8-46.224.38.228:22-68.220.241.50:42422.service - OpenSSH per-connection server daemon (68.220.241.50:42422). Mar 14 00:12:07.587375 sshd[6226]: Accepted publickey for core from 68.220.241.50 port 42422 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:07.589428 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:07.595414 systemd-logind[1453]: New session 9 of user core. Mar 14 00:12:07.603810 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 14 00:12:08.076693 sshd[6226]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:08.081044 systemd[1]: sshd@8-46.224.38.228:22-68.220.241.50:42422.service: Deactivated successfully. Mar 14 00:12:08.084967 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:12:08.086460 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:12:08.088911 systemd-logind[1453]: Removed session 9. Mar 14 00:12:13.183980 systemd[1]: Started sshd@9-46.224.38.228:22-68.220.241.50:46774.service - OpenSSH per-connection server daemon (68.220.241.50:46774). Mar 14 00:12:13.776856 sshd[6283]: Accepted publickey for core from 68.220.241.50 port 46774 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:13.779577 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:13.786652 systemd-logind[1453]: New session 10 of user core. Mar 14 00:12:13.788903 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:12:14.274852 sshd[6283]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:14.281160 systemd[1]: sshd@9-46.224.38.228:22-68.220.241.50:46774.service: Deactivated successfully. Mar 14 00:12:14.285896 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:12:14.289285 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:12:14.291095 systemd-logind[1453]: Removed session 10. Mar 14 00:12:14.384669 systemd[1]: Started sshd@10-46.224.38.228:22-68.220.241.50:46784.service - OpenSSH per-connection server daemon (68.220.241.50:46784). Mar 14 00:12:14.977739 sshd[6297]: Accepted publickey for core from 68.220.241.50 port 46784 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:14.980582 sshd[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:14.986388 systemd-logind[1453]: New session 11 of user core. 
Mar 14 00:12:14.993308 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:12:15.516086 sshd[6297]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:15.520450 systemd[1]: sshd@10-46.224.38.228:22-68.220.241.50:46784.service: Deactivated successfully. Mar 14 00:12:15.525081 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:12:15.525993 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:12:15.527338 systemd-logind[1453]: Removed session 11. Mar 14 00:12:15.626975 systemd[1]: Started sshd@11-46.224.38.228:22-68.220.241.50:46786.service - OpenSSH per-connection server daemon (68.220.241.50:46786). Mar 14 00:12:16.208750 sshd[6307]: Accepted publickey for core from 68.220.241.50 port 46786 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:16.210977 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:16.216730 systemd-logind[1453]: New session 12 of user core. Mar 14 00:12:16.223776 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:12:16.713864 sshd[6307]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:16.719086 systemd[1]: sshd@11-46.224.38.228:22-68.220.241.50:46786.service: Deactivated successfully. Mar 14 00:12:16.721267 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:12:16.722182 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:12:16.723493 systemd-logind[1453]: Removed session 12. Mar 14 00:12:21.833931 systemd[1]: Started sshd@12-46.224.38.228:22-68.220.241.50:46800.service - OpenSSH per-connection server daemon (68.220.241.50:46800). 
Mar 14 00:12:22.420890 sshd[6319]: Accepted publickey for core from 68.220.241.50 port 46800 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:22.423304 sshd[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:22.430269 systemd-logind[1453]: New session 13 of user core. Mar 14 00:12:22.433797 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:12:22.923970 sshd[6319]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:22.928426 systemd[1]: sshd@12-46.224.38.228:22-68.220.241.50:46800.service: Deactivated successfully. Mar 14 00:12:22.931783 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:12:22.935313 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:12:22.937307 systemd-logind[1453]: Removed session 13. Mar 14 00:12:23.034000 systemd[1]: Started sshd@13-46.224.38.228:22-68.220.241.50:48326.service - OpenSSH per-connection server daemon (68.220.241.50:48326). Mar 14 00:12:23.621575 sshd[6332]: Accepted publickey for core from 68.220.241.50 port 48326 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:23.623297 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:23.628881 systemd-logind[1453]: New session 14 of user core. Mar 14 00:12:23.635020 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:12:24.274906 sshd[6332]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:24.280747 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:12:24.281211 systemd[1]: sshd@13-46.224.38.228:22-68.220.241.50:48326.service: Deactivated successfully. Mar 14 00:12:24.284968 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:12:24.287391 systemd-logind[1453]: Removed session 14. 
Mar 14 00:12:24.384906 systemd[1]: Started sshd@14-46.224.38.228:22-68.220.241.50:48342.service - OpenSSH per-connection server daemon (68.220.241.50:48342). Mar 14 00:12:24.968549 sshd[6343]: Accepted publickey for core from 68.220.241.50 port 48342 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:24.970680 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:24.975490 systemd-logind[1453]: New session 15 of user core. Mar 14 00:12:24.983906 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 14 00:12:25.916807 sshd[6343]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:25.925500 systemd[1]: sshd@14-46.224.38.228:22-68.220.241.50:48342.service: Deactivated successfully. Mar 14 00:12:25.928625 systemd[1]: session-15.scope: Deactivated successfully. Mar 14 00:12:25.929822 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Mar 14 00:12:25.931491 systemd-logind[1453]: Removed session 15. Mar 14 00:12:26.027650 systemd[1]: Started sshd@15-46.224.38.228:22-68.220.241.50:48352.service - OpenSSH per-connection server daemon (68.220.241.50:48352). Mar 14 00:12:26.616190 sshd[6368]: Accepted publickey for core from 68.220.241.50 port 48352 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:26.617356 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:26.623824 systemd-logind[1453]: New session 16 of user core. Mar 14 00:12:26.638867 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 14 00:12:27.241805 sshd[6368]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:27.245479 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Mar 14 00:12:27.247137 systemd[1]: sshd@15-46.224.38.228:22-68.220.241.50:48352.service: Deactivated successfully. Mar 14 00:12:27.250506 systemd[1]: session-16.scope: Deactivated successfully. 
Mar 14 00:12:27.251844 systemd-logind[1453]: Removed session 16. Mar 14 00:12:27.353988 systemd[1]: Started sshd@16-46.224.38.228:22-68.220.241.50:48366.service - OpenSSH per-connection server daemon (68.220.241.50:48366). Mar 14 00:12:27.470414 systemd[1]: run-containerd-runc-k8s.io-20837f64e9c26b346d8a507d4de747f4b7f12404865befba1966d6865fa2b294-runc.M1MRTU.mount: Deactivated successfully. Mar 14 00:12:27.940589 sshd[6381]: Accepted publickey for core from 68.220.241.50 port 48366 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:27.942511 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:27.947436 systemd-logind[1453]: New session 17 of user core. Mar 14 00:12:27.951714 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 14 00:12:28.439485 sshd[6381]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:28.445678 systemd[1]: session-17.scope: Deactivated successfully. Mar 14 00:12:28.449704 systemd[1]: sshd@16-46.224.38.228:22-68.220.241.50:48366.service: Deactivated successfully. Mar 14 00:12:28.454625 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Mar 14 00:12:28.455981 systemd-logind[1453]: Removed session 17. Mar 14 00:12:33.554956 systemd[1]: Started sshd@17-46.224.38.228:22-68.220.241.50:45364.service - OpenSSH per-connection server daemon (68.220.241.50:45364). Mar 14 00:12:34.144626 sshd[6439]: Accepted publickey for core from 68.220.241.50 port 45364 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:34.147111 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:34.153712 systemd-logind[1453]: New session 18 of user core. Mar 14 00:12:34.157718 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 14 00:12:34.639791 sshd[6439]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:34.645595 systemd[1]: sshd@17-46.224.38.228:22-68.220.241.50:45364.service: Deactivated successfully. Mar 14 00:12:34.649052 systemd[1]: session-18.scope: Deactivated successfully. Mar 14 00:12:34.651693 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Mar 14 00:12:34.653158 systemd-logind[1453]: Removed session 18. Mar 14 00:12:39.756097 systemd[1]: Started sshd@18-46.224.38.228:22-68.220.241.50:45376.service - OpenSSH per-connection server daemon (68.220.241.50:45376). Mar 14 00:12:40.346459 sshd[6474]: Accepted publickey for core from 68.220.241.50 port 45376 ssh2: RSA SHA256:Ah127XV+5y5Yjoon4OGQ2nTrOG34dltV/xgH/axgYQk Mar 14 00:12:40.349021 sshd[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:12:40.356354 systemd-logind[1453]: New session 19 of user core. Mar 14 00:12:40.360979 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 14 00:12:40.852967 sshd[6474]: pam_unix(sshd:session): session closed for user core Mar 14 00:12:40.858781 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Mar 14 00:12:40.859268 systemd[1]: sshd@18-46.224.38.228:22-68.220.241.50:45376.service: Deactivated successfully. Mar 14 00:12:40.861599 systemd[1]: session-19.scope: Deactivated successfully. Mar 14 00:12:40.864249 systemd-logind[1453]: Removed session 19. Mar 14 00:12:55.300231 systemd[1]: cri-containerd-f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971.scope: Deactivated successfully. Mar 14 00:12:55.302716 systemd[1]: cri-containerd-f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971.scope: Consumed 3.648s CPU time, 17.2M memory peak, 0B memory swap peak. 
Mar 14 00:12:55.338885 containerd[1479]: time="2026-03-14T00:12:55.337846215Z" level=info msg="shim disconnected" id=f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971 namespace=k8s.io Mar 14 00:12:55.338885 containerd[1479]: time="2026-03-14T00:12:55.337908854Z" level=warning msg="cleaning up after shim disconnected" id=f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971 namespace=k8s.io Mar 14 00:12:55.338885 containerd[1479]: time="2026-03-14T00:12:55.337918974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:12:55.340522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971-rootfs.mount: Deactivated successfully. Mar 14 00:12:55.725711 kubelet[2609]: E0314 00:12:55.725397 2609 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52076->10.0.0.2:2379: read: connection timed out" Mar 14 00:12:55.990087 kubelet[2609]: I0314 00:12:55.989803 2609 scope.go:122] "RemoveContainer" containerID="f762a91573fc2de208c18069c51d631a690903f31c745fb261df4778bc64e971" Mar 14 00:12:55.994105 containerd[1479]: time="2026-03-14T00:12:55.993944321Z" level=info msg="CreateContainer within sandbox \"d5ad850719f9193537d889b043cbd62f07ea0adc9f46cb3a8183454a29995e8b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 14 00:12:55.998691 systemd[1]: cri-containerd-07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08.scope: Deactivated successfully. Mar 14 00:12:55.998959 systemd[1]: cri-containerd-07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08.scope: Consumed 19.060s CPU time. 
Mar 14 00:12:56.022573 containerd[1479]: time="2026-03-14T00:12:56.022351338Z" level=info msg="CreateContainer within sandbox \"d5ad850719f9193537d889b043cbd62f07ea0adc9f46cb3a8183454a29995e8b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"12e83bddb5c1fedb228895cbdee440a4d35130c81293a3f0685a763e82c0bdd6\"" Mar 14 00:12:56.024012 containerd[1479]: time="2026-03-14T00:12:56.023918838Z" level=info msg="StartContainer for \"12e83bddb5c1fedb228895cbdee440a4d35130c81293a3f0685a763e82c0bdd6\"" Mar 14 00:12:56.033925 containerd[1479]: time="2026-03-14T00:12:56.033827907Z" level=info msg="shim disconnected" id=07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08 namespace=k8s.io Mar 14 00:12:56.033925 containerd[1479]: time="2026-03-14T00:12:56.033892986Z" level=warning msg="cleaning up after shim disconnected" id=07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08 namespace=k8s.io Mar 14 00:12:56.033925 containerd[1479]: time="2026-03-14T00:12:56.033903386Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:12:56.036920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08-rootfs.mount: Deactivated successfully. Mar 14 00:12:56.075741 systemd[1]: Started cri-containerd-12e83bddb5c1fedb228895cbdee440a4d35130c81293a3f0685a763e82c0bdd6.scope - libcontainer container 12e83bddb5c1fedb228895cbdee440a4d35130c81293a3f0685a763e82c0bdd6. 
Mar 14 00:12:56.116081 containerd[1479]: time="2026-03-14T00:12:56.116034944Z" level=info msg="StartContainer for \"12e83bddb5c1fedb228895cbdee440a4d35130c81293a3f0685a763e82c0bdd6\" returns successfully" Mar 14 00:12:56.995291 kubelet[2609]: I0314 00:12:56.994975 2609 scope.go:122] "RemoveContainer" containerID="07323179de935fb5596525bfb4fa79538c8a73c043809b38b85b71dbd37a0d08" Mar 14 00:12:57.004886 containerd[1479]: time="2026-03-14T00:12:57.004770925Z" level=info msg="CreateContainer within sandbox \"8f99dbff75c3fa3a1dbead003ea093d4333cda58fff9e767e296e486004ecde0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 14 00:12:57.020228 containerd[1479]: time="2026-03-14T00:12:57.019852186Z" level=info msg="CreateContainer within sandbox \"8f99dbff75c3fa3a1dbead003ea093d4333cda58fff9e767e296e486004ecde0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2b83b2620e8fd741b62b8a3033825bbaea167843d617749facd5bfd603b34c3b\"" Mar 14 00:12:57.022422 containerd[1479]: time="2026-03-14T00:12:57.021828683Z" level=info msg="StartContainer for \"2b83b2620e8fd741b62b8a3033825bbaea167843d617749facd5bfd603b34c3b\"" Mar 14 00:12:57.058877 systemd[1]: Started cri-containerd-2b83b2620e8fd741b62b8a3033825bbaea167843d617749facd5bfd603b34c3b.scope - libcontainer container 2b83b2620e8fd741b62b8a3033825bbaea167843d617749facd5bfd603b34c3b. 
Mar 14 00:12:57.088717 containerd[1479]: time="2026-03-14T00:12:57.088648648Z" level=info msg="StartContainer for \"2b83b2620e8fd741b62b8a3033825bbaea167843d617749facd5bfd603b34c3b\" returns successfully" Mar 14 00:13:00.079310 kubelet[2609]: E0314 00:13:00.074595 2609 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51714->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-0ed13f424d.189c8ccdfd034a59 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-0ed13f424d,UID:98b0eeedf9844dccccb33c15b6df4a0e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-0ed13f424d,},FirstTimestamp:2026-03-14 00:12:49.618168409 +0000 UTC m=+199.688038300,LastTimestamp:2026-03-14 00:12:49.618168409 +0000 UTC m=+199.688038300,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-0ed13f424d,}"