Mar 7 00:52:29.888658 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 7 00:52:29.888682 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026 Mar 7 00:52:29.888692 kernel: KASLR enabled Mar 7 00:52:29.888698 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Mar 7 00:52:29.888704 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Mar 7 00:52:29.888710 kernel: random: crng init done Mar 7 00:52:29.888717 kernel: ACPI: Early table checksum verification disabled Mar 7 00:52:29.888723 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Mar 7 00:52:29.888729 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Mar 7 00:52:29.888737 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888744 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888750 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888756 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888763 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888770 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888778 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888785 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888791 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:29.888798 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Mar 7 00:52:29.888804 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Mar 7 00:52:29.888811 kernel: NUMA: Failed to initialise from firmware Mar 7 00:52:29.888817 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Mar 7 00:52:29.888824 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Mar 7 00:52:29.888830 kernel: Zone ranges: Mar 7 00:52:29.888837 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 7 00:52:29.888844 kernel: DMA32 empty Mar 7 00:52:29.888851 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Mar 7 00:52:29.888857 kernel: Movable zone start for each node Mar 7 00:52:29.888864 kernel: Early memory node ranges Mar 7 00:52:29.888882 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Mar 7 00:52:29.888890 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Mar 7 00:52:29.888896 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Mar 7 00:52:29.888903 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Mar 7 00:52:29.888910 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Mar 7 00:52:29.888916 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Mar 7 00:52:29.888923 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Mar 7 00:52:29.888930 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Mar 7 00:52:29.888938 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Mar 7 00:52:29.888945 kernel: psci: probing for conduit method from ACPI. 
Mar 7 00:52:29.888952 kernel: psci: PSCIv1.1 detected in firmware. Mar 7 00:52:29.888999 kernel: psci: Using standard PSCI v0.2 function IDs Mar 7 00:52:29.889007 kernel: psci: Trusted OS migration not required Mar 7 00:52:29.889015 kernel: psci: SMC Calling Convention v1.1 Mar 7 00:52:29.889024 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 7 00:52:29.889031 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 7 00:52:29.889038 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 7 00:52:29.889045 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 7 00:52:29.889052 kernel: Detected PIPT I-cache on CPU0 Mar 7 00:52:29.889059 kernel: CPU features: detected: GIC system register CPU interface Mar 7 00:52:29.889066 kernel: CPU features: detected: Hardware dirty bit management Mar 7 00:52:29.889073 kernel: CPU features: detected: Spectre-v4 Mar 7 00:52:29.889079 kernel: CPU features: detected: Spectre-BHB Mar 7 00:52:29.889086 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 7 00:52:29.889095 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 7 00:52:29.889102 kernel: CPU features: detected: ARM erratum 1418040 Mar 7 00:52:29.889109 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 7 00:52:29.889116 kernel: alternatives: applying boot alternatives Mar 7 00:52:29.889124 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 00:52:29.889131 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 00:52:29.889138 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 00:52:29.889145 kernel: Fallback order for Node 0: 0 Mar 7 00:52:29.889152 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Mar 7 00:52:29.889158 kernel: Policy zone: Normal Mar 7 00:52:29.889165 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 00:52:29.889174 kernel: software IO TLB: area num 2. Mar 7 00:52:29.889181 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Mar 7 00:52:29.889188 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved) Mar 7 00:52:29.889195 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 00:52:29.889202 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 00:52:29.889210 kernel: rcu: RCU event tracing is enabled. Mar 7 00:52:29.889217 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 00:52:29.889224 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 00:52:29.889231 kernel: Tracing variant of Tasks RCU enabled. Mar 7 00:52:29.889238 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 7 00:52:29.889245 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 00:52:29.889252 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 7 00:52:29.889261 kernel: GICv3: 256 SPIs implemented Mar 7 00:52:29.889267 kernel: GICv3: 0 Extended SPIs implemented Mar 7 00:52:29.889274 kernel: Root IRQ handler: gic_handle_irq Mar 7 00:52:29.889281 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 7 00:52:29.889288 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 7 00:52:29.889295 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 7 00:52:29.889302 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Mar 7 00:52:29.889309 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Mar 7 00:52:29.889316 kernel: GICv3: using LPI property table @0x00000001000e0000 Mar 7 00:52:29.889323 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Mar 7 00:52:29.889330 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 00:52:29.889338 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 00:52:29.889345 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 7 00:52:29.889353 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 7 00:52:29.889360 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 7 00:52:29.889367 kernel: Console: colour dummy device 80x25 Mar 7 00:52:29.889374 kernel: ACPI: Core revision 20230628 Mar 7 00:52:29.889381 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 7 00:52:29.889388 kernel: pid_max: default: 32768 minimum: 301 Mar 7 00:52:29.889395 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 00:52:29.889403 kernel: landlock: Up and running. Mar 7 00:52:29.889411 kernel: SELinux: Initializing. Mar 7 00:52:29.889419 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 00:52:29.889426 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 00:52:29.889433 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 00:52:29.889440 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 00:52:29.889448 kernel: rcu: Hierarchical SRCU implementation. Mar 7 00:52:29.889455 kernel: rcu: Max phase no-delay instances is 400. Mar 7 00:52:29.889462 kernel: Platform MSI: ITS@0x8080000 domain created Mar 7 00:52:29.889485 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 7 00:52:29.889494 kernel: Remapping and enabling EFI services. Mar 7 00:52:29.889501 kernel: smp: Bringing up secondary CPUs ... Mar 7 00:52:29.889508 kernel: Detected PIPT I-cache on CPU1 Mar 7 00:52:29.889516 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 7 00:52:29.889523 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Mar 7 00:52:29.889530 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 00:52:29.889551 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 7 00:52:29.889560 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 00:52:29.889567 kernel: SMP: Total of 2 processors activated. 
Mar 7 00:52:29.889576 kernel: CPU features: detected: 32-bit EL0 Support Mar 7 00:52:29.889584 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 7 00:52:29.889604 kernel: CPU features: detected: Common not Private translations Mar 7 00:52:29.889635 kernel: CPU features: detected: CRC32 instructions Mar 7 00:52:29.889647 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 7 00:52:29.889655 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 7 00:52:29.889663 kernel: CPU features: detected: LSE atomic instructions Mar 7 00:52:29.889670 kernel: CPU features: detected: Privileged Access Never Mar 7 00:52:29.889678 kernel: CPU features: detected: RAS Extension Support Mar 7 00:52:29.889689 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 7 00:52:29.889698 kernel: CPU: All CPU(s) started at EL1 Mar 7 00:52:29.889706 kernel: alternatives: applying system-wide alternatives Mar 7 00:52:29.889715 kernel: devtmpfs: initialized Mar 7 00:52:29.889724 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 00:52:29.889731 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 00:52:29.889739 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 00:52:29.889747 kernel: SMBIOS 3.0.0 present. Mar 7 00:52:29.889755 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Mar 7 00:52:29.889764 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 00:52:29.889772 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 7 00:52:29.889779 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 7 00:52:29.889787 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 7 00:52:29.889794 kernel: audit: initializing netlink subsys (disabled) Mar 7 00:52:29.889802 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Mar 7 00:52:29.889809 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 00:52:29.889817 kernel: cpuidle: using governor menu Mar 7 00:52:29.889826 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Mar 7 00:52:29.889833 kernel: ASID allocator initialised with 32768 entries Mar 7 00:52:29.889841 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 00:52:29.889849 kernel: Serial: AMBA PL011 UART driver Mar 7 00:52:29.889856 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 7 00:52:29.889864 kernel: Modules: 0 pages in range for non-PLT usage Mar 7 00:52:29.889890 kernel: Modules: 509008 pages in range for PLT usage Mar 7 00:52:29.889898 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 00:52:29.889906 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 00:52:29.889915 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 7 00:52:29.889923 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 7 00:52:29.889930 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 00:52:29.889938 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 00:52:29.889946 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 7 00:52:29.889954 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 7 00:52:29.889970 kernel: ACPI: Added _OSI(Module Device) Mar 7 00:52:29.889978 kernel: ACPI: Added _OSI(Processor Device) Mar 7 00:52:29.889986 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 00:52:29.889996 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 00:52:29.890004 kernel: ACPI: Interpreter enabled Mar 7 00:52:29.890011 kernel: ACPI: Using GIC for interrupt routing Mar 7 00:52:29.890019 kernel: ACPI: MCFG table detected, 1 entries Mar 7 00:52:29.890026 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 7 00:52:29.890034 kernel: printk: console [ttyAMA0] enabled Mar 7 00:52:29.890041 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 00:52:29.890195 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 00:52:29.890274 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 7 00:52:29.890341 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 7 00:52:29.890407 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 7 00:52:29.890472 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 7 00:52:29.890482 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 7 00:52:29.890490 kernel: PCI host bridge to bus 0000:00 Mar 7 00:52:29.890562 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 7 00:52:29.890626 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 7 00:52:29.890685 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 7 00:52:29.890748 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 00:52:29.890836 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 7 00:52:29.890955 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Mar 7 00:52:29.891079 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Mar 7 00:52:29.891149 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Mar 7 00:52:29.891234 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.891305 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Mar 7 00:52:29.891380 kernel: pci 0000:00:02.1: [1b36:000c] 
type 01 class 0x060400 Mar 7 00:52:29.891449 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Mar 7 00:52:29.891524 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.891591 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Mar 7 00:52:29.891667 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.891735 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Mar 7 00:52:29.891808 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.891937 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Mar 7 00:52:29.892035 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.892105 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Mar 7 00:52:29.892182 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.892250 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Mar 7 00:52:29.892327 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.892393 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Mar 7 00:52:29.892464 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:29.892532 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Mar 7 00:52:29.892622 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Mar 7 00:52:29.892689 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Mar 7 00:52:29.892766 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Mar 7 00:52:29.892837 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Mar 7 00:52:29.894050 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 7 00:52:29.894140 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 7 00:52:29.894221 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Mar 7 00:52:29.894300 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Mar 7 00:52:29.894377 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Mar 7 00:52:29.894447 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Mar 7 00:52:29.894517 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Mar 7 00:52:29.895894 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Mar 7 00:52:29.896029 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Mar 7 00:52:29.896127 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Mar 7 00:52:29.896200 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Mar 7 00:52:29.896272 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Mar 7 00:52:29.896359 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Mar 7 00:52:29.896430 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Mar 7 00:52:29.896502 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Mar 7 00:52:29.896583 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Mar 7 00:52:29.896660 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Mar 7 00:52:29.896732 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Mar 7 00:52:29.896803 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 7 00:52:29.898929 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Mar 7 00:52:29.899057 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to 
[bus 01] add_size 100000 add_align 100000 Mar 7 00:52:29.899133 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Mar 7 00:52:29.899213 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Mar 7 00:52:29.899281 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Mar 7 00:52:29.899349 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Mar 7 00:52:29.899421 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 7 00:52:29.899488 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Mar 7 00:52:29.899555 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Mar 7 00:52:29.899627 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 7 00:52:29.899695 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Mar 7 00:52:29.899768 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Mar 7 00:52:29.899851 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 7 00:52:29.900018 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Mar 7 00:52:29.900099 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Mar 7 00:52:29.900175 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 7 00:52:29.900243 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Mar 7 00:52:29.900312 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Mar 7 00:52:29.900389 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 7 00:52:29.900456 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Mar 7 00:52:29.900522 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Mar 7 00:52:29.900593 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 7 00:52:29.900661 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Mar 7 00:52:29.900728 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Mar 7 00:52:29.900800 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 7 00:52:29.903973 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Mar 7 00:52:29.904094 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Mar 7 00:52:29.904168 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Mar 7 00:52:29.904236 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Mar 7 00:52:29.904306 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 
0x10200000-0x103fffff] Mar 7 00:52:29.904376 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Mar 7 00:52:29.904448 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Mar 7 00:52:29.904524 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Mar 7 00:52:29.904595 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Mar 7 00:52:29.904664 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Mar 7 00:52:29.904734 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Mar 7 00:52:29.904801 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Mar 7 00:52:29.904902 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Mar 7 00:52:29.905018 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 7 00:52:29.905103 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Mar 7 00:52:29.905171 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 7 00:52:29.905242 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Mar 7 00:52:29.905310 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 7 00:52:29.905378 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Mar 7 00:52:29.905449 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Mar 7 00:52:29.905523 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Mar 7 00:52:29.905634 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Mar 7 00:52:29.905711 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Mar 7 00:52:29.905780 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Mar 7 00:52:29.905865 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Mar 7 00:52:29.906020 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Mar 7 00:52:29.906091 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Mar 7 00:52:29.906159 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Mar 7 00:52:29.906228 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Mar 7 00:52:29.906301 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Mar 7 00:52:29.906372 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Mar 7 00:52:29.906441 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Mar 7 00:52:29.906510 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Mar 7 00:52:29.906577 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Mar 7 00:52:29.906644 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Mar 7 00:52:29.906709 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Mar 7 00:52:29.906778 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Mar 7 00:52:29.906859 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Mar 7 00:52:29.906939 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Mar 7 00:52:29.907021 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Mar 7 00:52:29.907099 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Mar 7 00:52:29.907175 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Mar 7 00:52:29.907244 kernel: pci 0000:01:00.0: BAR 
4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 7 00:52:29.907314 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Mar 7 00:52:29.907381 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Mar 7 00:52:29.907452 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Mar 7 00:52:29.907520 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Mar 7 00:52:29.907614 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Mar 7 00:52:29.907695 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Mar 7 00:52:29.907770 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Mar 7 00:52:29.907837 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Mar 7 00:52:29.910080 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Mar 7 00:52:29.910169 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Mar 7 00:52:29.910249 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Mar 7 00:52:29.910319 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Mar 7 00:52:29.910388 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Mar 7 00:52:29.910470 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Mar 7 00:52:29.910554 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Mar 7 00:52:29.910621 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Mar 7 00:52:29.910695 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Mar 7 00:52:29.910764 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Mar 7 00:52:29.910829 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Mar 7 00:52:29.913533 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Mar 7 00:52:29.913629 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Mar 7 00:52:29.913708 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Mar 7 00:52:29.913786 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Mar 7 00:52:29.913859 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Mar 7 00:52:29.913947 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Mar 7 00:52:29.914039 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Mar 7 00:52:29.914111 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Mar 7 00:52:29.914186 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Mar 7 00:52:29.914254 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Mar 7 00:52:29.914322 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Mar 7 00:52:29.914393 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Mar 7 00:52:29.914459 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Mar 7 00:52:29.914525 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 7 00:52:29.914600 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Mar 7 00:52:29.914667 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Mar 7 00:52:29.914737 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Mar 7 00:52:29.914808 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Mar 7 00:52:29.918032 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Mar 7 00:52:29.918137 kernel: pci 0000:00:02.6: bridge window [mem 
0x10c00000-0x10dfffff] Mar 7 00:52:29.918206 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 7 00:52:29.918278 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Mar 7 00:52:29.918344 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Mar 7 00:52:29.918409 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Mar 7 00:52:29.918476 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 7 00:52:29.918544 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Mar 7 00:52:29.918611 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Mar 7 00:52:29.918682 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Mar 7 00:52:29.918748 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Mar 7 00:52:29.918816 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 7 00:52:29.918892 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 7 00:52:29.918953 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 7 00:52:29.919076 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Mar 7 00:52:29.919142 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Mar 7 00:52:29.919208 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Mar 7 00:52:29.919277 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Mar 7 00:52:29.919341 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Mar 7 00:52:29.919404 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Mar 7 00:52:29.919474 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Mar 7 00:52:29.919537 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Mar 7 00:52:29.919602 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Mar 7 00:52:29.919674 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Mar 7 00:52:29.919737 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Mar 7 00:52:29.919813 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Mar 7 00:52:29.923045 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Mar 7 00:52:29.923147 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Mar 7 00:52:29.923209 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Mar 7 00:52:29.923288 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Mar 7 00:52:29.923352 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Mar 7 00:52:29.923416 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 7 00:52:29.923490 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Mar 7 00:52:29.923556 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Mar 7 00:52:29.923616 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 7 00:52:29.923689 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Mar 7 00:52:29.923750 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Mar 7 00:52:29.923809 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 7 00:52:29.923895 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Mar 7 00:52:29.923993 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Mar 7 00:52:29.924073 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Mar 7 00:52:29.924085 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 
Mar 7 00:52:29.924093 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 7 00:52:29.924101 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 7 00:52:29.924109 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 7 00:52:29.924117 kernel: iommu: Default domain type: Translated Mar 7 00:52:29.924125 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 7 00:52:29.924133 kernel: efivars: Registered efivars operations Mar 7 00:52:29.924144 kernel: vgaarb: loaded Mar 7 00:52:29.924152 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 7 00:52:29.924160 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 00:52:29.924168 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 00:52:29.924176 kernel: pnp: PnP ACPI init Mar 7 00:52:29.924255 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 7 00:52:29.924267 kernel: pnp: PnP ACPI: found 1 devices Mar 7 00:52:29.924275 kernel: NET: Registered PF_INET protocol family Mar 7 00:52:29.924284 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 00:52:29.924295 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 00:52:29.924303 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 00:52:29.924312 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 00:52:29.924320 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 00:52:29.924328 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 00:52:29.924336 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 00:52:29.924345 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 00:52:29.924353 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 00:52:29.924430 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 7 00:52:29.924445 kernel: PCI: CLS 0 bytes, default 64 Mar 7 00:52:29.924453 kernel: kvm [1]: HYP mode not available Mar 7 00:52:29.924461 kernel: Initialise system trusted keyrings Mar 7 00:52:29.924469 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 00:52:29.924477 kernel: Key type asymmetric registered Mar 7 00:52:29.924484 kernel: Asymmetric key parser 'x509' registered Mar 7 00:52:29.924492 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 7 00:52:29.924500 kernel: io scheduler mq-deadline registered Mar 7 00:52:29.924508 kernel: io scheduler kyber registered Mar 7 00:52:29.924518 kernel: io scheduler bfq registered Mar 7 00:52:29.924527 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 7 00:52:29.924598 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 7 00:52:29.924668 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 7 00:52:29.924739 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.924816 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 7 00:52:29.927013 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Mar 7 00:52:29.927123 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.927203 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 7 00:52:29.927273 kernel: pcieport 0000:00:02.2: AER: enabled with 
IRQ 52 Mar 7 00:52:29.927343 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.927417 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 7 00:52:29.927491 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 7 00:52:29.927560 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.927631 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 7 00:52:29.927699 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 7 00:52:29.927766 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.927837 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 7 00:52:29.927928 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 7 00:52:29.928021 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.928095 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 7 00:52:29.928164 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 7 00:52:29.928234 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.928308 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 7 00:52:29.928384 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 7 00:52:29.928456 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.928467 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 7 00:52:29.928536 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 7 00:52:29.928605 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 7 00:52:29.928673 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:29.928683 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 7 00:52:29.928694 kernel: ACPI: button: Power Button [PWRB] Mar 7 00:52:29.928702 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 7 00:52:29.928775 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 7 00:52:29.928851 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 7 00:52:29.928863 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 00:52:29.930957 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 7 00:52:29.931103 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 7 00:52:29.931116 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 7 00:52:29.931124 kernel: thunder_xcv, ver 1.0 Mar 7 00:52:29.931140 kernel: thunder_bgx, ver 1.0 Mar 7 00:52:29.931148 kernel: nicpf, ver 1.0 Mar 7 00:52:29.931156 kernel: nicvf, ver 1.0 Mar 7 00:52:29.931265 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 7 00:52:29.931336 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:52:29 UTC (1772844749) Mar 7 00:52:29.931347 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 00:52:29.931355 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 7 00:52:29.931363 kernel: 
watchdog: Delayed init of the lockup detector failed: -19 Mar 7 00:52:29.931374 kernel: watchdog: Hard watchdog permanently disabled Mar 7 00:52:29.931385 kernel: NET: Registered PF_INET6 protocol family Mar 7 00:52:29.931394 kernel: Segment Routing with IPv6 Mar 7 00:52:29.931404 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 00:52:29.931413 kernel: NET: Registered PF_PACKET protocol family Mar 7 00:52:29.931422 kernel: Key type dns_resolver registered Mar 7 00:52:29.931430 kernel: registered taskstats version 1 Mar 7 00:52:29.931438 kernel: Loading compiled-in X.509 certificates Mar 7 00:52:29.931446 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9' Mar 7 00:52:29.931456 kernel: Key type .fscrypt registered Mar 7 00:52:29.931464 kernel: Key type fscrypt-provisioning registered Mar 7 00:52:29.931471 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 00:52:29.931480 kernel: ima: Allocated hash algorithm: sha1 Mar 7 00:52:29.931488 kernel: ima: No architecture policies found Mar 7 00:52:29.931496 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 7 00:52:29.931505 kernel: clk: Disabling unused clocks Mar 7 00:52:29.931512 kernel: Freeing unused kernel memory: 39424K Mar 7 00:52:29.931520 kernel: Run /init as init process Mar 7 00:52:29.931530 kernel: with arguments: Mar 7 00:52:29.931538 kernel: /init Mar 7 00:52:29.931545 kernel: with environment: Mar 7 00:52:29.931553 kernel: HOME=/ Mar 7 00:52:29.931560 kernel: TERM=linux Mar 7 00:52:29.931570 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 00:52:29.931581 systemd[1]: Detected virtualization kvm. Mar 7 00:52:29.931589 systemd[1]: Detected architecture arm64. Mar 7 00:52:29.931599 systemd[1]: Running in initrd. Mar 7 00:52:29.931608 systemd[1]: No hostname configured, using default hostname. Mar 7 00:52:29.931616 systemd[1]: Hostname set to . Mar 7 00:52:29.931624 systemd[1]: Initializing machine ID from VM UUID. Mar 7 00:52:29.931633 systemd[1]: Queued start job for default target initrd.target. Mar 7 00:52:29.931641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:52:29.931650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:52:29.931659 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 00:52:29.931669 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 00:52:29.931678 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 00:52:29.931688 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 00:52:29.931698 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 00:52:29.931707 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 00:52:29.931716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 7 00:52:29.931724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:52:29.931735 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:52:29.931743 systemd[1]: Reached target slices.target - Slice Units. Mar 7 00:52:29.931752 systemd[1]: Reached target swap.target - Swaps. Mar 7 00:52:29.931760 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:52:29.931768 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:52:29.931777 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:52:29.931786 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 00:52:29.931795 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 00:52:29.931805 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:52:29.931814 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 00:52:29.931822 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:52:29.931831 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:52:29.931840 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 00:52:29.931848 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 00:52:29.931856 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 00:52:29.931865 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 00:52:29.932918 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 00:52:29.932936 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 00:52:29.932945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:29.932954 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 00:52:29.933008 systemd-journald[237]: Collecting audit messages is disabled. Mar 7 00:52:29.933034 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:52:29.933043 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 00:52:29.933052 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 00:52:29.933061 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 00:52:29.933071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 00:52:29.933080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:29.933090 systemd-journald[237]: Journal started Mar 7 00:52:29.933110 systemd-journald[237]: Runtime Journal (/run/log/journal/2628d838e3be4ef8a9c385ae1b336d73) is 8.0M, max 76.6M, 68.6M free. Mar 7 00:52:29.915414 systemd-modules-load[238]: Inserted module 'overlay' Mar 7 00:52:29.936026 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:52:29.936070 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 00:52:29.937891 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 00:52:29.938137 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 7 00:52:29.944172 kernel: Bridge firewalling registered Mar 7 00:52:29.942929 systemd-modules-load[238]: Inserted module 'br_netfilter' Mar 7 00:52:29.949235 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 00:52:29.953928 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 00:52:29.962113 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:52:29.966122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:52:29.968819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:52:29.970726 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:52:29.978109 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 00:52:29.983204 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 00:52:29.990107 dracut-cmdline[271]: dracut-dracut-053 Mar 7 00:52:29.996445 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 00:52:30.021688 systemd-resolved[275]: Positive Trust Anchors: Mar 7 00:52:30.021703 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:52:30.021734 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:52:30.032246 systemd-resolved[275]: Defaulting to hostname 'linux'. Mar 7 00:52:30.033836 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:52:30.035340 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:52:30.071941 kernel: SCSI subsystem initialized Mar 7 00:52:30.075922 kernel: Loading iSCSI transport class v2.0-870. Mar 7 00:52:30.083922 kernel: iscsi: registered transport (tcp) Mar 7 00:52:30.097933 kernel: iscsi: registered transport (qla4xxx) Mar 7 00:52:30.098009 kernel: QLogic iSCSI HBA Driver Mar 7 00:52:30.148733 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 00:52:30.157294 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 00:52:30.176088 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 00:52:30.176170 kernel: device-mapper: uevent: version 1.0.3 Mar 7 00:52:30.176194 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 00:52:30.230936 kernel: raid6: neonx8 gen() 15635 MB/s Mar 7 00:52:30.247952 kernel: raid6: neonx4 gen() 13724 MB/s Mar 7 00:52:30.264913 kernel: raid6: neonx2 gen() 13078 MB/s Mar 7 00:52:30.281944 kernel: raid6: neonx1 gen() 10343 MB/s Mar 7 00:52:30.298926 kernel: raid6: int64x8 gen() 6873 MB/s Mar 7 00:52:30.315944 kernel: raid6: int64x4 gen() 7291 MB/s Mar 7 00:52:30.332927 kernel: raid6: int64x2 gen() 6064 MB/s Mar 7 00:52:30.350076 kernel: raid6: int64x1 gen() 5005 MB/s Mar 7 00:52:30.350174 kernel: raid6: using algorithm neonx8 gen() 15635 MB/s Mar 7 00:52:30.366934 kernel: raid6: .... xor() 11826 MB/s, rmw enabled Mar 7 00:52:30.367042 kernel: raid6: using neon recovery algorithm Mar 7 00:52:30.371902 kernel: xor: measuring software checksum speed Mar 7 00:52:30.371947 kernel: 8regs : 19745 MB/sec Mar 7 00:52:30.371984 kernel: 32regs : 17552 MB/sec Mar 7 00:52:30.372922 kernel: arm64_neon : 27007 MB/sec Mar 7 00:52:30.372972 kernel: xor: using function: arm64_neon (27007 MB/sec) Mar 7 00:52:30.424941 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 00:52:30.439338 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 00:52:30.445093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:52:30.468321 systemd-udevd[457]: Using default interface naming scheme 'v255'. Mar 7 00:52:30.475086 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:52:30.487141 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 00:52:30.507091 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Mar 7 00:52:30.545039 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:52:30.552110 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:52:30.600099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:52:30.606111 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 00:52:30.632392 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 00:52:30.634474 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:52:30.635195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:52:30.637202 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:52:30.645075 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 00:52:30.664179 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:52:30.713909 kernel: scsi host0: Virtio SCSI HBA Mar 7 00:52:30.723897 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 00:52:30.727553 kernel: ACPI: bus type USB registered Mar 7 00:52:30.727600 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 7 00:52:30.727635 kernel: usbcore: registered new interface driver usbfs Mar 7 00:52:30.729234 kernel: usbcore: registered new interface driver hub Mar 7 00:52:30.729285 kernel: usbcore: registered new device driver usb Mar 7 00:52:30.736537 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:52:30.736670 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 00:52:30.740030 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:52:30.740799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:52:30.741162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:30.742788 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:30.750130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:30.772172 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 7 00:52:30.773910 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 7 00:52:30.774121 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 00:52:30.775887 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 7 00:52:30.785561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:30.793070 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:52:30.796746 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 7 00:52:30.796981 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 7 00:52:30.797081 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 7 00:52:30.797165 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 7 00:52:30.797246 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 7 00:52:30.803937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 00:52:30.804017 kernel: GPT:17805311 != 80003071 Mar 7 00:52:30.804030 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 00:52:30.806482 kernel: GPT:17805311 != 80003071 Mar 7 00:52:30.806539 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 00:52:30.806551 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:52:30.810906 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 7 00:52:30.816617 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 00:52:30.816825 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 7 00:52:30.820887 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 7 00:52:30.823249 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 00:52:30.823430 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 7 00:52:30.823518 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 7 00:52:30.825993 kernel: hub 1-0:1.0: USB hub found Mar 7 00:52:30.826206 kernel: hub 1-0:1.0: 4 ports detected Mar 7 00:52:30.830277 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 7 00:52:30.830465 kernel: hub 2-0:1.0: USB hub found Mar 7 00:52:30.830613 kernel: hub 2-0:1.0: 4 ports detected Mar 7 00:52:30.829359 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:52:30.864884 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (517) Mar 7 00:52:30.864944 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (529) Mar 7 00:52:30.871586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 7 00:52:30.884123 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Mar 7 00:52:30.890806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 7 00:52:30.893700 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 7 00:52:30.900291 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 00:52:30.911243 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 00:52:30.918880 disk-uuid[576]: Primary Header is updated. Mar 7 00:52:30.918880 disk-uuid[576]: Secondary Entries is updated. Mar 7 00:52:30.918880 disk-uuid[576]: Secondary Header is updated. Mar 7 00:52:30.924904 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:52:30.932903 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:52:31.075054 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 7 00:52:31.212263 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 7 00:52:31.212357 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 7 00:52:31.212710 kernel: usbcore: registered new interface driver usbhid Mar 7 00:52:31.213046 kernel: usbhid: USB HID core driver Mar 7 00:52:31.318088 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 7 00:52:31.448927 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 7 00:52:31.502928 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 7 00:52:31.936137 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:52:31.937906 disk-uuid[577]: The operation has completed successfully. Mar 7 00:52:31.983472 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 00:52:31.983583 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 00:52:32.002201 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 00:52:32.006812 sh[592]: Success Mar 7 00:52:32.023106 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 7 00:52:32.085155 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 00:52:32.089027 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 00:52:32.091917 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 00:52:32.106498 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31 Mar 7 00:52:32.106564 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:32.106583 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 00:52:32.106599 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 00:52:32.107337 kernel: BTRFS info (device dm-0): using free space tree Mar 7 00:52:32.113914 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 00:52:32.116070 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 7 00:52:32.118741 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 00:52:32.132207 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 7 00:52:32.136190 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 00:52:32.150640 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:32.150696 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:32.150708 kernel: BTRFS info (device sda6): using free space tree Mar 7 00:52:32.158997 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 00:52:32.159057 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 00:52:32.168980 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:32.168810 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 00:52:32.177690 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 00:52:32.183114 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 00:52:32.275059 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 00:52:32.281271 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:52:32.289485 ignition[682]: Ignition 2.19.0 Mar 7 00:52:32.289494 ignition[682]: Stage: fetch-offline Mar 7 00:52:32.294285 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:52:32.289530 ignition[682]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:32.289539 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:32.289730 ignition[682]: parsed url from cmdline: "" Mar 7 00:52:32.289733 ignition[682]: no config URL provided Mar 7 00:52:32.289738 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 00:52:32.289744 ignition[682]: no config at "/usr/lib/ignition/user.ign" Mar 7 00:52:32.289749 ignition[682]: failed to fetch config: resource requires networking Mar 7 00:52:32.290121 ignition[682]: Ignition finished successfully Mar 7 00:52:32.314846 systemd-networkd[780]: lo: Link UP Mar 7 00:52:32.314856 systemd-networkd[780]: lo: Gained carrier Mar 7 00:52:32.316487 systemd-networkd[780]: Enumeration completed Mar 7 00:52:32.316588 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:52:32.317516 systemd[1]: Reached target network.target - Network. Mar 7 00:52:32.318451 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:32.318454 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:32.320201 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:32.320205 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:32.321752 systemd-networkd[780]: eth0: Link UP Mar 7 00:52:32.321756 systemd-networkd[780]: eth0: Gained carrier Mar 7 00:52:32.321763 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:32.325100 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 7 00:52:32.326296 systemd-networkd[780]: eth1: Link UP Mar 7 00:52:32.326299 systemd-networkd[780]: eth1: Gained carrier Mar 7 00:52:32.326305 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:32.338773 ignition[783]: Ignition 2.19.0 Mar 7 00:52:32.338797 ignition[783]: Stage: fetch Mar 7 00:52:32.339069 ignition[783]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:32.339081 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:32.339183 ignition[783]: parsed url from cmdline: "" Mar 7 00:52:32.339187 ignition[783]: no config URL provided Mar 7 00:52:32.339192 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 00:52:32.339202 ignition[783]: no config at "/usr/lib/ignition/user.ign" Mar 7 00:52:32.339222 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 7 00:52:32.339904 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 00:52:32.365024 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 7 00:52:32.377008 systemd-networkd[780]: eth0: DHCPv4 address 88.99.14.23/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 7 00:52:32.540072 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 7 00:52:32.546269 ignition[783]: GET result: OK Mar 7 00:52:32.546415 ignition[783]: parsing config with SHA512: d2ac8237c123e0788b867bb63c799f7c530a68f0f1a7f9f90fc8b6071b222f5aae397800d968b2785f4239c2058ea1a935b97d9994eab4485e68458b2cc02bb9 Mar 7 00:52:32.552307 unknown[783]: fetched base config from "system" Mar 7 00:52:32.552345 unknown[783]: fetched base config from "system" Mar 7 00:52:32.552732 ignition[783]: fetch: fetch complete Mar 7 00:52:32.552351 unknown[783]: fetched user config from "hetzner" Mar 7 00:52:32.552737 ignition[783]: fetch: fetch passed Mar 7 00:52:32.555547 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 00:52:32.552779 ignition[783]: Ignition finished successfully Mar 7 00:52:32.562050 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 00:52:32.574233 ignition[790]: Ignition 2.19.0 Mar 7 00:52:32.574242 ignition[790]: Stage: kargs Mar 7 00:52:32.574411 ignition[790]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:32.574420 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:32.575379 ignition[790]: kargs: kargs passed Mar 7 00:52:32.575431 ignition[790]: Ignition finished successfully Mar 7 00:52:32.579223 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 00:52:32.587274 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 00:52:32.600450 ignition[796]: Ignition 2.19.0 Mar 7 00:52:32.600462 ignition[796]: Stage: disks Mar 7 00:52:32.600627 ignition[796]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:32.600637 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:32.603227 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 00:52:32.601573 ignition[796]: disks: disks passed Mar 7 00:52:32.601623 ignition[796]: Ignition finished successfully Mar 7 00:52:32.604735 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 7 00:52:32.605936 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Mar 7 00:52:32.606964 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:52:32.607837 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:52:32.609042 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:52:32.617208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 00:52:32.634735 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 7 00:52:32.640190 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 00:52:32.644014 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 00:52:32.697232 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none. Mar 7 00:52:32.698172 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 00:52:32.699493 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 00:52:32.708075 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 00:52:32.711360 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 00:52:32.713718 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 00:52:32.717976 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 00:52:32.718020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:52:32.721699 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 00:52:32.729887 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (812) Mar 7 00:52:32.731048 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 00:52:32.737919 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:32.737973 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:32.737986 kernel: BTRFS info (device sda6): using free space tree Mar 7 00:52:32.740970 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 00:52:32.741012 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 00:52:32.743778 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 00:52:32.787379 coreos-metadata[814]: Mar 07 00:52:32.787 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 7 00:52:32.790232 coreos-metadata[814]: Mar 07 00:52:32.789 INFO Fetch successful Mar 7 00:52:32.791344 coreos-metadata[814]: Mar 07 00:52:32.791 INFO wrote hostname ci-4081-3-6-n-d5610c1cbf to /sysroot/etc/hostname Mar 7 00:52:32.793244 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 00:52:32.794737 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 00:52:32.801611 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Mar 7 00:52:32.808079 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 00:52:32.812458 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 00:52:32.927181 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 00:52:32.935034 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 7 00:52:32.938105 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 7 00:52:32.947954 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:32.966266 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 00:52:32.971588 ignition[929]: INFO : Ignition 2.19.0 Mar 7 00:52:32.972562 ignition[929]: INFO : Stage: mount Mar 7 00:52:32.973001 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:32.973001 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:32.975041 ignition[929]: INFO : mount: mount passed Mar 7 00:52:32.975041 ignition[929]: INFO : Ignition finished successfully Mar 7 00:52:32.975769 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 00:52:32.982018 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 00:52:33.105845 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 00:52:33.118196 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 00:52:33.128970 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Mar 7 00:52:33.130994 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:33.131051 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:33.131068 kernel: BTRFS info (device sda6): using free space tree Mar 7 00:52:33.134919 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 00:52:33.134983 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 00:52:33.137652 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 00:52:33.159637 ignition[958]: INFO : Ignition 2.19.0 Mar 7 00:52:33.159637 ignition[958]: INFO : Stage: files Mar 7 00:52:33.160863 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:33.160863 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:33.160863 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Mar 7 00:52:33.164218 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 00:52:33.164218 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 00:52:33.167175 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 00:52:33.167175 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 00:52:33.167175 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 00:52:33.165716 unknown[958]: wrote ssh authorized keys file for user: core Mar 7 00:52:33.170714 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 00:52:33.170714 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 7 00:52:33.233655 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 00:52:33.343135 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 00:52:33.344287 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 00:52:33.344287 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Mar 7 00:52:33.344287 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:52:33.344287 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:52:33.344287 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 7 00:52:33.349790 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Mar 7 00:52:33.598763 systemd-networkd[780]: eth1: Gained IPv6LL Mar 7 00:52:33.644516 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 00:52:33.662917 systemd-networkd[780]: eth0: Gained IPv6LL Mar 7 00:52:33.874448 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Mar 7 00:52:33.874448 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 00:52:33.878992 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:52:33.878992 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:52:33.878992 ignition[958]: INFO : files: files passed Mar 7 00:52:33.878992 ignition[958]: INFO : Ignition finished successfully Mar 7 00:52:33.880391 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 00:52:33.890951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 00:52:33.895902 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 00:52:33.897977 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 00:52:33.898658 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 00:52:33.913260 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:52:33.913260 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:52:33.916043 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:52:33.918291 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:52:33.919285 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 00:52:33.924087 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 00:52:33.952372 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 00:52:33.952675 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 00:52:33.954563 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 00:52:33.955663 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 00:52:33.956742 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 00:52:33.962152 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 00:52:33.977185 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:52:33.982208 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 00:52:33.993749 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:52:33.994599 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:52:33.995829 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 00:52:33.996810 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 00:52:33.996976 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:52:33.998205 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Mar 7 00:52:33.998789 systemd[1]: Stopped target basic.target - Basic System. Mar 7 00:52:33.999896 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 00:52:34.000918 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:52:34.001940 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 00:52:34.003136 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 00:52:34.004199 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:52:34.005420 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 00:52:34.006510 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 00:52:34.007704 systemd[1]: Stopped target swap.target - Swaps. Mar 7 00:52:34.008583 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 00:52:34.008715 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:52:34.009980 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:52:34.010580 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:52:34.011619 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 00:52:34.011690 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:52:34.012753 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 00:52:34.012877 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 00:52:34.014475 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 00:52:34.014585 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:52:34.015764 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 00:52:34.015853 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 00:52:34.017053 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 00:52:34.017146 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 00:52:34.028199 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 00:52:34.032650 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 00:52:34.033218 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 00:52:34.033338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:52:34.037191 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 00:52:34.037291 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:52:34.044632 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 00:52:34.045318 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 7 00:52:34.050264 ignition[1009]: INFO : Ignition 2.19.0 Mar 7 00:52:34.050264 ignition[1009]: INFO : Stage: umount Mar 7 00:52:34.052862 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:34.052862 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:34.052862 ignition[1009]: INFO : umount: umount passed Mar 7 00:52:34.052862 ignition[1009]: INFO : Ignition finished successfully Mar 7 00:52:34.053276 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 00:52:34.053389 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Mar 7 00:52:34.054469 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 00:52:34.054516 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 00:52:34.055510 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 00:52:34.055549 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 00:52:34.056744 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 00:52:34.056785 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 00:52:34.060136 systemd[1]: Stopped target network.target - Network. Mar 7 00:52:34.062944 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 00:52:34.063005 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:52:34.063729 systemd[1]: Stopped target paths.target - Path Units. Mar 7 00:52:34.064592 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 00:52:34.068974 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:52:34.070511 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 00:52:34.071695 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 00:52:34.073160 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 00:52:34.073210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:52:34.073979 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 00:52:34.074013 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:52:34.074766 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 00:52:34.074817 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 00:52:34.075663 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 00:52:34.075699 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 00:52:34.076675 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 00:52:34.077853 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 00:52:34.079559 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 00:52:34.080561 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 00:52:34.080647 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 00:52:34.081861 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 00:52:34.083217 systemd-networkd[780]: eth1: DHCPv6 lease lost Mar 7 00:52:34.083658 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 00:52:34.085358 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 00:52:34.085467 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 00:52:34.086980 systemd-networkd[780]: eth0: DHCPv6 lease lost Mar 7 00:52:34.089359 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 00:52:34.089475 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 00:52:34.090840 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 7 00:52:34.091010 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:52:34.098105 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 00:52:34.099282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 00:52:34.099355 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 7 00:52:34.100530 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 00:52:34.100584 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:52:34.101685 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 00:52:34.101735 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 00:52:34.102979 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 00:52:34.103026 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:52:34.104601 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:52:34.119705 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 00:52:34.119823 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 00:52:34.128199 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 00:52:34.128484 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:52:34.131626 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 00:52:34.131678 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 00:52:34.132664 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 00:52:34.132692 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:52:34.133874 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 00:52:34.133919 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 00:52:34.135467 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 00:52:34.135507 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 00:52:34.136822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:52:34.136864 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:52:34.150205 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 00:52:34.152236 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 00:52:34.152345 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:52:34.153487 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 7 00:52:34.153527 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 00:52:34.154220 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 00:52:34.154257 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:52:34.155436 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:52:34.155474 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:34.162014 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 00:52:34.162131 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 00:52:34.163385 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 00:52:34.170157 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 00:52:34.177721 systemd[1]: Switching root. Mar 7 00:52:34.207532 systemd-journald[237]: Journal stopped Mar 7 00:52:35.072558 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Mar 7 00:52:35.072630 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 00:52:35.072653 kernel: SELinux: policy capability open_perms=1 Mar 7 00:52:35.072665 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 00:52:35.072679 kernel: SELinux: policy capability always_check_network=0 Mar 7 00:52:35.072689 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 00:52:35.072700 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 00:52:35.072710 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 00:52:35.072724 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 00:52:35.072734 kernel: audit: type=1403 audit(1772844754.365:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 00:52:35.072744 systemd[1]: Successfully loaded SELinux policy in 33.492ms. Mar 7 00:52:35.072768 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.582ms. Mar 7 00:52:35.072781 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 00:52:35.072793 systemd[1]: Detected virtualization kvm. Mar 7 00:52:35.072804 systemd[1]: Detected architecture arm64. Mar 7 00:52:35.072814 systemd[1]: Detected first boot. Mar 7 00:52:35.072825 systemd[1]: Hostname set to <ci-4081-3-6-n-d5610c1cbf>. Mar 7 00:52:35.072837 systemd[1]: Initializing machine ID from VM UUID. Mar 7 00:52:35.072847 zram_generator::config[1052]: No configuration found. Mar 7 00:52:35.072858 systemd[1]: Populated /etc with preset unit settings. Mar 7 00:52:35.079122 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 00:52:35.079152 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 00:52:35.079172 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 00:52:35.079184 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 00:52:35.079194 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 00:52:35.079232 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 00:52:35.079247 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 00:52:35.079258 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 00:52:35.079269 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 00:52:35.079284 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 00:52:35.079294 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 00:52:35.079304 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:52:35.084979 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:52:35.085010 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 00:52:35.085021 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 7 00:52:35.085032 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 00:52:35.085054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 00:52:35.085065 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 7 00:52:35.085079 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:52:35.085090 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 00:52:35.085101 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 00:52:35.085111 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 00:52:35.085122 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 00:52:35.085132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:52:35.085145 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:52:35.085155 systemd[1]: Reached target slices.target - Slice Units. Mar 7 00:52:35.085165 systemd[1]: Reached target swap.target - Swaps. Mar 7 00:52:35.085175 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 00:52:35.085186 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 00:52:35.085196 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:52:35.085207 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 00:52:35.085217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:52:35.085227 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 00:52:35.085239 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 00:52:35.085250 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 00:52:35.085260 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 00:52:35.085270 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 00:52:35.085280 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 00:52:35.085291 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 00:52:35.085302 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 00:52:35.085313 systemd[1]: Reached target machines.target - Containers. Mar 7 00:52:35.085323 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 00:52:35.085335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:35.085346 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 00:52:35.085361 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 00:52:35.085372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:52:35.085383 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:52:35.085396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:52:35.085406 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 7 00:52:35.085417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 7 00:52:35.085427 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 00:52:35.085438 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 00:52:35.085448 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 00:52:35.085458 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 00:52:35.085468 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 00:52:35.085479 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 00:52:35.085490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 00:52:35.085501 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 00:52:35.085512 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 00:52:35.085523 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:52:35.085533 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 00:52:35.085543 systemd[1]: Stopped verity-setup.service. Mar 7 00:52:35.085553 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 00:52:35.085563 kernel: ACPI: bus type drm_connector registered Mar 7 00:52:35.086984 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 00:52:35.087077 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 00:52:35.087112 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 00:52:35.087143 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 00:52:35.087161 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 00:52:35.087182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:52:35.087194 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 00:52:35.087204 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 00:52:35.087255 systemd-journald[1122]: Collecting audit messages is disabled. Mar 7 00:52:35.087282 kernel: fuse: init (API version 7.39) Mar 7 00:52:35.087293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:52:35.087304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:52:35.087317 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:52:35.087330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:52:35.087343 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 00:52:35.087353 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 00:52:35.087363 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:52:35.087374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:52:35.087386 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 00:52:35.087397 kernel: loop: module loaded Mar 7 00:52:35.087409 systemd-journald[1122]: Journal started Mar 7 00:52:35.087452 systemd-journald[1122]: Runtime Journal (/run/log/journal/2628d838e3be4ef8a9c385ae1b336d73) is 8.0M, max 76.6M, 68.6M free. Mar 7 00:52:35.087500 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:52:34.836204 systemd[1]: Queued start job for default target multi-user.target. 
Mar 7 00:52:34.856472 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 7 00:52:34.856854 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 00:52:35.091283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:52:35.094888 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 00:52:35.095665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 00:52:35.097916 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 00:52:35.098843 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 00:52:35.112449 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 00:52:35.129165 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 00:52:35.138103 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 00:52:35.140017 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 00:52:35.140066 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:52:35.142787 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 00:52:35.161132 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 00:52:35.167129 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 00:52:35.167856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:35.171052 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 00:52:35.175550 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 00:52:35.178171 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:52:35.188076 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 00:52:35.189127 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:52:35.193128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:52:35.200468 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 00:52:35.204129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 00:52:35.212930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:52:35.214090 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 00:52:35.217278 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 7 00:52:35.218175 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 00:52:35.227275 systemd-journald[1122]: Time spent on flushing to /var/log/journal/2628d838e3be4ef8a9c385ae1b336d73 is 38.501ms for 1126 entries. Mar 7 00:52:35.227275 systemd-journald[1122]: System Journal (/var/log/journal/2628d838e3be4ef8a9c385ae1b336d73) is 8.0M, max 584.8M, 576.8M free. Mar 7 00:52:35.280972 systemd-journald[1122]: Received client request to flush runtime journal. 
Mar 7 00:52:35.281018 kernel: loop0: detected capacity change from 0 to 8 Mar 7 00:52:35.281033 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 00:52:35.228593 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 00:52:35.242707 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 00:52:35.250166 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 7 00:52:35.259143 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 00:52:35.286974 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 00:52:35.288900 kernel: loop1: detected capacity change from 0 to 197488 Mar 7 00:52:35.307097 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 7 00:52:35.310116 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Mar 7 00:52:35.310131 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Mar 7 00:52:35.314289 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:52:35.315754 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 00:52:35.318602 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 00:52:35.321979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 00:52:35.336111 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 00:52:35.347906 kernel: loop2: detected capacity change from 0 to 114432 Mar 7 00:52:35.364957 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 00:52:35.371168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 00:52:35.384899 kernel: loop3: detected capacity change from 0 to 114328 Mar 7 00:52:35.391699 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 7 00:52:35.391717 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Mar 7 00:52:35.399449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:52:35.426910 kernel: loop4: detected capacity change from 0 to 8 Mar 7 00:52:35.430902 kernel: loop5: detected capacity change from 0 to 197488 Mar 7 00:52:35.451911 kernel: loop6: detected capacity change from 0 to 114432 Mar 7 00:52:35.465960 kernel: loop7: detected capacity change from 0 to 114328 Mar 7 00:52:35.484527 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Mar 7 00:52:35.485798 (sd-merge)[1196]: Merged extensions into '/usr'. Mar 7 00:52:35.494337 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 00:52:35.494357 systemd[1]: Reloading... Mar 7 00:52:35.617903 zram_generator::config[1222]: No configuration found. Mar 7 00:52:35.683902 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 00:52:35.757050 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:52:35.810714 systemd[1]: Reloading finished in 315 ms. 
Mar 7 00:52:35.832747 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 00:52:35.836523 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 00:52:35.846499 systemd[1]: Starting ensure-sysext.service... Mar 7 00:52:35.849154 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 00:52:35.858725 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)... Mar 7 00:52:35.858742 systemd[1]: Reloading... Mar 7 00:52:35.890901 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 00:52:35.891205 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 00:52:35.891967 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 00:52:35.892233 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 7 00:52:35.892284 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 7 00:52:35.901595 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:52:35.901611 systemd-tmpfiles[1260]: Skipping /boot Mar 7 00:52:35.928785 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:52:35.928803 systemd-tmpfiles[1260]: Skipping /boot Mar 7 00:52:35.948898 zram_generator::config[1288]: No configuration found. Mar 7 00:52:36.051290 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:52:36.105348 systemd[1]: Reloading finished in 246 ms. Mar 7 00:52:36.122650 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 00:52:36.129633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:52:36.144386 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:52:36.149298 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 00:52:36.155201 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 00:52:36.164124 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 00:52:36.174656 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:52:36.182265 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 00:52:36.186149 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:36.188974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:52:36.191559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:52:36.196179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:52:36.197162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:36.205191 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 00:52:36.208994 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 7 00:52:36.219263 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 00:52:36.222536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:36.222711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:36.225841 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Mar 7 00:52:36.228258 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:36.230128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:52:36.232222 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:36.241319 systemd[1]: Finished ensure-sysext.service. Mar 7 00:52:36.258228 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 00:52:36.259336 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:52:36.261208 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 7 00:52:36.277325 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 00:52:36.278857 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:52:36.279070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:52:36.281219 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:52:36.281377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:52:36.295441 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:52:36.296038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:52:36.296081 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 00:52:36.296400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:52:36.297968 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:52:36.301636 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:52:36.303821 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 00:52:36.308756 augenrules[1373]: No rules Mar 7 00:52:36.310700 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:52:36.315291 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:52:36.315995 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:52:36.349556 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 00:52:36.375736 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 7 00:52:36.446529 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 00:52:36.447360 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 7 00:52:36.490845 systemd-networkd[1369]: lo: Link UP Mar 7 00:52:36.491954 systemd-networkd[1369]: lo: Gained carrier Mar 7 00:52:36.493654 systemd-networkd[1369]: Enumeration completed Mar 7 00:52:36.494604 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:52:36.495209 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:36.495300 systemd-networkd[1369]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:36.496331 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:36.496406 systemd-networkd[1369]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:36.497145 systemd-networkd[1369]: eth0: Link UP Mar 7 00:52:36.497237 systemd-networkd[1369]: eth0: Gained carrier Mar 7 00:52:36.497298 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:36.503206 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 00:52:36.503371 systemd-networkd[1369]: eth1: Link UP Mar 7 00:52:36.503377 systemd-networkd[1369]: eth1: Gained carrier Mar 7 00:52:36.503397 systemd-networkd[1369]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:36.540688 systemd-resolved[1336]: Positive Trust Anchors: Mar 7 00:52:36.540709 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:52:36.540741 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:52:36.547644 systemd-resolved[1336]: Using system hostname 'ci-4081-3-6-n-d5610c1cbf'. Mar 7 00:52:36.550005 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:52:36.551528 systemd[1]: Reached target network.target - Network. Mar 7 00:52:36.552384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:52:36.558982 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1363) Mar 7 00:52:36.561382 systemd-networkd[1369]: eth0: DHCPv4 address 88.99.14.23/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 7 00:52:36.564977 systemd-networkd[1369]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 7 00:52:36.565392 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Mar 7 00:52:36.565463 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Mar 7 00:52:36.565481 systemd-networkd[1369]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 7 00:52:36.566206 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Mar 7 00:52:36.579956 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 00:52:36.598418 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 00:52:36.611519 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 00:52:36.649631 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 00:52:36.655282 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 7 00:52:36.655417 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:36.662699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:52:36.666960 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:52:36.670404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:52:36.671279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:36.671319 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 00:52:36.682008 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:52:36.682180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:52:36.691191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:52:36.691542 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:52:36.692789 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:52:36.695667 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:52:36.695832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:52:36.697296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:52:36.736255 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:36.739081 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Mar 7 00:52:36.739230 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 7 00:52:36.739263 kernel: [drm] features: -context_init Mar 7 00:52:36.741901 kernel: [drm] number of scanouts: 1 Mar 7 00:52:36.742049 kernel: [drm] number of cap sets: 0 Mar 7 00:52:36.742198 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Mar 7 00:52:36.748168 kernel: Console: switching to colour frame buffer device 160x50 Mar 7 00:52:36.753948 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 7 00:52:36.764679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:52:36.764988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:36.772473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 7 00:52:36.830402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:36.884663 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 00:52:36.891312 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 00:52:36.907269 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 00:52:36.934484 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 00:52:36.936261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:52:36.937443 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:52:36.939082 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 00:52:36.940079 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 00:52:36.940950 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 00:52:36.941568 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 00:52:36.942338 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 00:52:36.942987 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 00:52:36.943024 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:52:36.943474 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:52:36.945455 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 00:52:36.947833 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 00:52:36.953280 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 00:52:36.956093 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 00:52:36.958619 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 00:52:36.959521 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:52:36.960074 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:52:36.960590 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:52:36.960622 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:52:36.963032 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 00:52:36.966998 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 00:52:36.970212 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 00:52:36.975756 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 00:52:36.982167 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 00:52:36.986100 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 00:52:36.986671 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 00:52:36.990129 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 00:52:36.992865 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Mar 7 00:52:36.999040 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 7 00:52:37.004480 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 00:52:37.015646 dbus-daemon[1449]: [system] SELinux support is enabled Mar 7 00:52:37.020148 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 00:52:37.030525 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 00:52:37.032630 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 00:52:37.033195 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:52:37.035080 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 00:52:37.041991 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 00:52:37.043080 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 7 00:52:37.048293 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 00:52:37.052019 jq[1450]: false Mar 7 00:52:37.055812 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 00:52:37.055862 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 00:52:37.057713 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 00:52:37.057729 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 00:52:37.063815 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 00:52:37.064026 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 00:52:37.070888 jq[1462]: true Mar 7 00:52:37.069715 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 00:52:37.070334 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 7 00:52:37.088159 extend-filesystems[1451]: Found loop4 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found loop5 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found loop6 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found loop7 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda1 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda2 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda3 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found usr Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda4 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda6 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda7 Mar 7 00:52:37.099843 extend-filesystems[1451]: Found sda9 Mar 7 00:52:37.099843 extend-filesystems[1451]: Checking size of /dev/sda9 Mar 7 00:52:37.135085 jq[1475]: true Mar 7 00:52:37.135175 tar[1465]: linux-arm64/LICENSE Mar 7 00:52:37.135175 tar[1465]: linux-arm64/helm Mar 7 00:52:37.108257 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 00:52:37.125149 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 00:52:37.126947 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 00:52:37.147720 update_engine[1461]: I20260307 00:52:37.138195 1461 main.cc:92] Flatcar Update Engine starting Mar 7 00:52:37.145962 systemd[1]: Started update-engine.service - Update Engine. Mar 7 00:52:37.148090 extend-filesystems[1451]: Resized partition /dev/sda9 Mar 7 00:52:37.151419 coreos-metadata[1448]: Mar 07 00:52:37.140 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 7 00:52:37.151419 coreos-metadata[1448]: Mar 07 00:52:37.144 INFO Fetch successful Mar 7 00:52:37.151419 coreos-metadata[1448]: Mar 07 00:52:37.144 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 7 00:52:37.163470 update_engine[1461]: I20260307 00:52:37.149056 1461 update_check_scheduler.cc:74] Next update check in 3m6s Mar 7 00:52:37.156290 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 00:52:37.163601 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Mar 7 00:52:37.166963 coreos-metadata[1448]: Mar 07 00:52:37.151 INFO Fetch successful Mar 7 00:52:37.178985 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 7 00:52:37.223625 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 00:52:37.226895 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:52:37.261786 systemd-logind[1459]: New seat seat0. Mar 7 00:52:37.272857 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Mar 7 00:52:37.272928 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 7 00:52:37.273143 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:52:37.312646 bash[1522]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:52:37.317824 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 00:52:37.333976 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1370) Mar 7 00:52:37.336403 systemd[1]: Starting sshkeys.service... 
Mar 7 00:52:37.358025 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 00:52:37.366420 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 7 00:52:37.403970 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 7 00:52:37.409851 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:52:37.421607 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 7 00:52:37.421607 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 7 00:52:37.421607 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 7 00:52:37.429588 extend-filesystems[1451]: Resized filesystem in /dev/sda9 Mar 7 00:52:37.429588 extend-filesystems[1451]: Found sr0 Mar 7 00:52:37.424471 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 00:52:37.424666 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:52:37.446346 containerd[1479]: time="2026-03-07T00:52:37.446241880Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 00:52:37.477944 containerd[1479]: time="2026-03-07T00:52:37.477881720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:37.478194 coreos-metadata[1530]: Mar 07 00:52:37.478 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 7 00:52:37.480326 containerd[1479]: time="2026-03-07T00:52:37.480270200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:37.480417 containerd[1479]: time="2026-03-07T00:52:37.480402320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 00:52:37.480495 containerd[1479]: time="2026-03-07T00:52:37.480482440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 00:52:37.480692 containerd[1479]: time="2026-03-07T00:52:37.480675280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.480760640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.480845720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.480859480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481059280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481076000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481089400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481098720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481166240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481349880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481450080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:37.481610 containerd[1479]: time="2026-03-07T00:52:37.481465280Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 00:52:37.481857 containerd[1479]: time="2026-03-07T00:52:37.481538360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 00:52:37.481857 containerd[1479]: time="2026-03-07T00:52:37.481575520Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:52:37.482519 coreos-metadata[1530]: Mar 07 00:52:37.482 INFO Fetch successful Mar 7 00:52:37.486395 unknown[1530]: wrote ssh authorized keys file for user: core Mar 7 00:52:37.488542 containerd[1479]: time="2026-03-07T00:52:37.488509800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 00:52:37.488651 containerd[1479]: time="2026-03-07T00:52:37.488638960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 00:52:37.488785 containerd[1479]: time="2026-03-07T00:52:37.488770800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 00:52:37.488898 containerd[1479]: time="2026-03-07T00:52:37.488883640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 00:52:37.489302 containerd[1479]: time="2026-03-07T00:52:37.489000600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 00:52:37.489302 containerd[1479]: time="2026-03-07T00:52:37.489175000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 00:52:37.489637 containerd[1479]: time="2026-03-07T00:52:37.489617880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 7 00:52:37.489802 containerd[1479]: time="2026-03-07T00:52:37.489783360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 00:52:37.489865 containerd[1479]: time="2026-03-07T00:52:37.489851160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 00:52:37.489949 containerd[1479]: time="2026-03-07T00:52:37.489935400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 00:52:37.489999 containerd[1479]: time="2026-03-07T00:52:37.489988120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490049240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490075800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490093920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490109200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490122120Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490134160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490145760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490168040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490184360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490196600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490209800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490221440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490235120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.490503 containerd[1479]: time="2026-03-07T00:52:37.490246600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490259640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490272080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490287480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490303760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490316000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490328880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490345280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490368800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490381720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.491564 containerd[1479]: time="2026-03-07T00:52:37.490393480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 00:52:37.491983 containerd[1479]: time="2026-03-07T00:52:37.491756600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 00:52:37.493666 containerd[1479]: time="2026-03-07T00:52:37.491789920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 00:52:37.493666 containerd[1479]: time="2026-03-07T00:52:37.492550920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 00:52:37.493666 containerd[1479]: time="2026-03-07T00:52:37.492573960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 00:52:37.493666 containerd[1479]: time="2026-03-07T00:52:37.492586920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 00:52:37.493666 containerd[1479]: time="2026-03-07T00:52:37.492609240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 00:52:37.493666 containerd[1479]: time="2026-03-07T00:52:37.492619840Z" level=info msg="NRI interface is disabled by configuration." Mar 7 00:52:37.493666 containerd[1479]: time="2026-03-07T00:52:37.492630080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 00:52:37.495667 containerd[1479]: time="2026-03-07T00:52:37.494032600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 00:52:37.495667 containerd[1479]: time="2026-03-07T00:52:37.494105200Z" level=info msg="Connect containerd service" Mar 7 00:52:37.495667 containerd[1479]: time="2026-03-07T00:52:37.494147000Z" level=info msg="using legacy CRI server" Mar 7 00:52:37.495667 containerd[1479]: time="2026-03-07T00:52:37.494154320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 00:52:37.495667 containerd[1479]: time="2026-03-07T00:52:37.494267080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 00:52:37.496302 containerd[1479]: time="2026-03-07T00:52:37.496276480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:52:37.496549 
containerd[1479]: time="2026-03-07T00:52:37.496518680Z" level=info msg="Start subscribing containerd event" Mar 7 00:52:37.496632 containerd[1479]: time="2026-03-07T00:52:37.496620840Z" level=info msg="Start recovering state" Mar 7 00:52:37.496735 containerd[1479]: time="2026-03-07T00:52:37.496722120Z" level=info msg="Start event monitor" Mar 7 00:52:37.496794 containerd[1479]: time="2026-03-07T00:52:37.496782120Z" level=info msg="Start snapshots syncer" Mar 7 00:52:37.496848 containerd[1479]: time="2026-03-07T00:52:37.496838640Z" level=info msg="Start cni network conf syncer for default" Mar 7 00:52:37.496960 containerd[1479]: time="2026-03-07T00:52:37.496946840Z" level=info msg="Start streaming server" Mar 7 00:52:37.497772 containerd[1479]: time="2026-03-07T00:52:37.497751320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 00:52:37.497965 containerd[1479]: time="2026-03-07T00:52:37.497949640Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 00:52:37.500736 containerd[1479]: time="2026-03-07T00:52:37.499223400Z" level=info msg="containerd successfully booted in 0.056658s" Mar 7 00:52:37.500131 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 00:52:37.518099 update-ssh-keys[1539]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:52:37.519295 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 00:52:37.525622 systemd[1]: Finished sshkeys.service. Mar 7 00:52:37.793640 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 00:52:37.815233 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 00:52:37.824198 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 00:52:37.832282 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 00:52:37.832485 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 00:52:37.839392 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 00:52:37.849943 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 00:52:37.859292 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 00:52:37.862081 tar[1465]: linux-arm64/README.md Mar 7 00:52:37.862535 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 7 00:52:37.865821 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 00:52:37.880916 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 00:52:38.014194 systemd-networkd[1369]: eth1: Gained IPv6LL Mar 7 00:52:38.015169 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Mar 7 00:52:38.018312 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 00:52:38.019984 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 00:52:38.027355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:52:38.030487 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 00:52:38.056949 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 00:52:38.142484 systemd-networkd[1369]: eth0: Gained IPv6LL Mar 7 00:52:38.142959 systemd-timesyncd[1353]: Network configuration changed, trying to establish connection. Mar 7 00:52:38.827616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:52:38.829072 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 7 00:52:38.834313 systemd[1]: Startup finished in 763ms (kernel) + 4.675s (initrd) + 4.501s (userspace) = 9.940s. Mar 7 00:52:38.842766 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:52:39.286381 kubelet[1579]: E0307 00:52:39.286288 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:52:39.290685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:52:39.290998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:52:44.515645 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 00:52:44.526412 systemd[1]: Started sshd@0-88.99.14.23:22-172.104.241.92:53992.service - OpenSSH per-connection server daemon (172.104.241.92:53992). Mar 7 00:52:44.543017 sshd[1591]: Connection closed by 172.104.241.92 port 53992 Mar 7 00:52:44.544417 systemd[1]: sshd@0-88.99.14.23:22-172.104.241.92:53992.service: Deactivated successfully. Mar 7 00:52:49.461647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 00:52:49.469254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:52:49.587275 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:52:49.587928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:52:49.631588 kubelet[1602]: E0307 00:52:49.631525 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:52:49.635549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:52:49.635709 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:52:59.711411 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 00:52:59.720259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:52:59.874152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:52:59.875667 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:52:59.913179 kubelet[1617]: E0307 00:52:59.913041 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:52:59.916349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:52:59.916486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:08.896289 systemd-timesyncd[1353]: Contacted time server 185.168.228.59:123 (2.flatcar.pool.ntp.org). 
Mar 7 00:53:08.896394 systemd-timesyncd[1353]: Initial clock synchronization to Sat 2026-03-07 00:53:08.896022 UTC. Mar 7 00:53:08.897448 systemd-resolved[1336]: Clock change detected. Flushing caches. Mar 7 00:53:10.414568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 00:53:10.423236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:10.538132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:10.540145 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:10.573934 kubelet[1632]: E0307 00:53:10.573822 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:10.577736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:10.578001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:13.502348 systemd[1]: Started sshd@1-88.99.14.23:22-20.161.92.111:34512.service - OpenSSH per-connection server daemon (20.161.92.111:34512). Mar 7 00:53:14.093489 sshd[1641]: Accepted publickey for core from 20.161.92.111 port 34512 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:14.096701 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:14.111448 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 00:53:14.111493 systemd-logind[1459]: New session 1 of user core. Mar 7 00:53:14.119272 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 00:53:14.134635 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 00:53:14.146468 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 00:53:14.150637 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 00:53:14.260052 systemd[1645]: Queued start job for default target default.target. Mar 7 00:53:14.274018 systemd[1645]: Created slice app.slice - User Application Slice. Mar 7 00:53:14.274233 systemd[1645]: Reached target paths.target - Paths. Mar 7 00:53:14.274357 systemd[1645]: Reached target timers.target - Timers. Mar 7 00:53:14.276148 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 00:53:14.300241 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 00:53:14.300364 systemd[1645]: Reached target sockets.target - Sockets. Mar 7 00:53:14.300377 systemd[1645]: Reached target basic.target - Basic System. Mar 7 00:53:14.300436 systemd[1645]: Reached target default.target - Main User Target. Mar 7 00:53:14.300466 systemd[1645]: Startup finished in 141ms. Mar 7 00:53:14.300819 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 00:53:14.309247 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 00:53:14.749366 systemd[1]: Started sshd@2-88.99.14.23:22-20.161.92.111:34528.service - OpenSSH per-connection server daemon (20.161.92.111:34528). 
Mar 7 00:53:15.334378 sshd[1656]: Accepted publickey for core from 20.161.92.111 port 34528 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:15.336564 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:15.341739 systemd-logind[1459]: New session 2 of user core. Mar 7 00:53:15.348168 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 00:53:15.753786 sshd[1656]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:15.759172 systemd[1]: sshd@2-88.99.14.23:22-20.161.92.111:34528.service: Deactivated successfully. Mar 7 00:53:15.761007 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 00:53:15.762105 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Mar 7 00:53:15.764194 systemd-logind[1459]: Removed session 2. Mar 7 00:53:15.856998 systemd[1]: Started sshd@3-88.99.14.23:22-20.161.92.111:34536.service - OpenSSH per-connection server daemon (20.161.92.111:34536). Mar 7 00:53:16.459134 sshd[1663]: Accepted publickey for core from 20.161.92.111 port 34536 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:16.461571 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:16.466799 systemd-logind[1459]: New session 3 of user core. Mar 7 00:53:16.474205 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 00:53:16.873442 sshd[1663]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:16.877370 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Mar 7 00:53:16.877387 systemd[1]: sshd@3-88.99.14.23:22-20.161.92.111:34536.service: Deactivated successfully. Mar 7 00:53:16.878828 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 00:53:16.881122 systemd-logind[1459]: Removed session 3. Mar 7 00:53:16.984336 systemd[1]: Started sshd@4-88.99.14.23:22-20.161.92.111:34542.service - OpenSSH per-connection server daemon (20.161.92.111:34542). Mar 7 00:53:17.574961 sshd[1670]: Accepted publickey for core from 20.161.92.111 port 34542 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:17.576390 sshd[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:17.583344 systemd-logind[1459]: New session 4 of user core. Mar 7 00:53:17.591213 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 00:53:17.995326 sshd[1670]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:18.001195 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Mar 7 00:53:18.002024 systemd[1]: sshd@4-88.99.14.23:22-20.161.92.111:34542.service: Deactivated successfully. Mar 7 00:53:18.005995 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 00:53:18.007251 systemd-logind[1459]: Removed session 4. Mar 7 00:53:18.107404 systemd[1]: Started sshd@5-88.99.14.23:22-20.161.92.111:34546.service - OpenSSH per-connection server daemon (20.161.92.111:34546). Mar 7 00:53:18.693576 sshd[1677]: Accepted publickey for core from 20.161.92.111 port 34546 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:18.695365 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:18.701725 systemd-logind[1459]: New session 5 of user core. Mar 7 00:53:18.717255 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 7 00:53:19.026380 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 00:53:19.026688 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:19.045404 sudo[1680]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:19.140592 sshd[1677]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:19.146181 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Mar 7 00:53:19.146723 systemd[1]: sshd@5-88.99.14.23:22-20.161.92.111:34546.service: Deactivated successfully. Mar 7 00:53:19.150193 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 00:53:19.152091 systemd-logind[1459]: Removed session 5. Mar 7 00:53:19.251275 systemd[1]: Started sshd@6-88.99.14.23:22-20.161.92.111:34552.service - OpenSSH per-connection server daemon (20.161.92.111:34552). Mar 7 00:53:19.849459 sshd[1685]: Accepted publickey for core from 20.161.92.111 port 34552 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:19.851860 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:19.856960 systemd-logind[1459]: New session 6 of user core. Mar 7 00:53:19.865217 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 00:53:20.175768 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 00:53:20.176096 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:20.180579 sudo[1689]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:20.186122 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 00:53:20.186394 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:20.203328 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 00:53:20.213588 auditctl[1692]: No rules Mar 7 00:53:20.214247 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 00:53:20.214436 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 00:53:20.221764 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:53:20.255123 augenrules[1710]: No rules Mar 7 00:53:20.257356 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:53:20.260697 sudo[1688]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:20.356414 sshd[1685]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:20.361951 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Mar 7 00:53:20.362567 systemd[1]: sshd@6-88.99.14.23:22-20.161.92.111:34552.service: Deactivated successfully. Mar 7 00:53:20.364576 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 00:53:20.366793 systemd-logind[1459]: Removed session 6. Mar 7 00:53:20.470849 systemd[1]: Started sshd@7-88.99.14.23:22-20.161.92.111:59006.service - OpenSSH per-connection server daemon (20.161.92.111:59006). Mar 7 00:53:20.664516 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 00:53:20.676902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:20.792476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:53:20.797616 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:20.839296 kubelet[1728]: E0307 00:53:20.839226 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:20.842629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:20.842853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:21.054776 sshd[1718]: Accepted publickey for core from 20.161.92.111 port 59006 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:21.057791 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:21.064740 systemd-logind[1459]: New session 7 of user core. Mar 7 00:53:21.072233 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 00:53:21.379645 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 00:53:21.380007 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:21.679248 (dockerd)[1753]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 00:53:21.679366 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 00:53:21.930398 dockerd[1753]: time="2026-03-07T00:53:21.929828267Z" level=info msg="Starting up" Mar 7 00:53:22.040378 dockerd[1753]: time="2026-03-07T00:53:22.039979427Z" level=info msg="Loading containers: start." Mar 7 00:53:22.149036 kernel: Initializing XFRM netlink socket Mar 7 00:53:22.240019 systemd-networkd[1369]: docker0: Link UP Mar 7 00:53:22.254437 dockerd[1753]: time="2026-03-07T00:53:22.254360507Z" level=info msg="Loading containers: done." Mar 7 00:53:22.276574 dockerd[1753]: time="2026-03-07T00:53:22.275939227Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 00:53:22.276574 dockerd[1753]: time="2026-03-07T00:53:22.276112147Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 00:53:22.276574 dockerd[1753]: time="2026-03-07T00:53:22.276271347Z" level=info msg="Daemon has completed initialization" Mar 7 00:53:22.313777 dockerd[1753]: time="2026-03-07T00:53:22.313616867Z" level=info msg="API listen on /run/docker.sock" Mar 7 00:53:22.313965 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 00:53:22.753846 containerd[1479]: time="2026-03-07T00:53:22.753712987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 7 00:53:23.327056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3429314666.mount: Deactivated successfully. Mar 7 00:53:23.348837 update_engine[1461]: I20260307 00:53:23.348465 1461 update_attempter.cc:509] Updating boot flags... 
Mar 7 00:53:23.403898 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1912) Mar 7 00:53:23.478907 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1774) Mar 7 00:53:24.290947 containerd[1479]: time="2026-03-07T00:53:24.290744587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:24.292761 containerd[1479]: time="2026-03-07T00:53:24.292709907Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701894" Mar 7 00:53:24.294115 containerd[1479]: time="2026-03-07T00:53:24.294061827Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:24.297850 containerd[1479]: time="2026-03-07T00:53:24.297802267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:24.300021 containerd[1479]: time="2026-03-07T00:53:24.299961227Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 1.54620756s" Mar 7 00:53:24.300021 containerd[1479]: time="2026-03-07T00:53:24.299999587Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\"" Mar 7 00:53:24.300566 containerd[1479]: time="2026-03-07T00:53:24.300531387Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 7 00:53:25.310901 containerd[1479]: time="2026-03-07T00:53:25.309245747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:25.310901 containerd[1479]: time="2026-03-07T00:53:25.310696987Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063059" Mar 7 00:53:25.310901 containerd[1479]: time="2026-03-07T00:53:25.310829947Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:25.314073 containerd[1479]: time="2026-03-07T00:53:25.314041427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:25.315324 containerd[1479]: time="2026-03-07T00:53:25.315292307Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" in 1.01472808s" Mar 7 00:53:25.315427 containerd[1479]: 
time="2026-03-07T00:53:25.315411627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\"" Mar 7 00:53:25.316428 containerd[1479]: time="2026-03-07T00:53:25.316400867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 7 00:53:25.707859 systemd[1]: Started sshd@8-88.99.14.23:22-172.104.241.92:53626.service - OpenSSH per-connection server daemon (172.104.241.92:53626). Mar 7 00:53:25.785989 sshd[1970]: Connection closed by 172.104.241.92 port 53626 Mar 7 00:53:25.788323 systemd[1]: sshd@8-88.99.14.23:22-172.104.241.92:53626.service: Deactivated successfully. Mar 7 00:53:25.922182 systemd[1]: Started sshd@10-88.99.14.23:22-172.104.241.92:53638.service - OpenSSH per-connection server daemon (172.104.241.92:53638). Mar 7 00:53:25.925851 systemd[1]: Started sshd@9-88.99.14.23:22-172.104.241.92:53636.service - OpenSSH per-connection server daemon (172.104.241.92:53636). Mar 7 00:53:26.405931 containerd[1479]: time="2026-03-07T00:53:26.405282867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:26.407486 containerd[1479]: time="2026-03-07T00:53:26.407154947Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797921" Mar 7 00:53:26.409320 containerd[1479]: time="2026-03-07T00:53:26.408716427Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:26.412764 containerd[1479]: time="2026-03-07T00:53:26.412700027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:26.414611 containerd[1479]: time="2026-03-07T00:53:26.414237027Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.09779736s" Mar 7 00:53:26.414611 containerd[1479]: time="2026-03-07T00:53:26.414282307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\"" Mar 7 00:53:26.415001 containerd[1479]: time="2026-03-07T00:53:26.414972227Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 7 00:53:26.672185 systemd[1]: Started sshd@11-88.99.14.23:22-172.104.241.92:53648.service - OpenSSH per-connection server daemon (172.104.241.92:53648). Mar 7 00:53:26.676226 systemd[1]: Started sshd@12-88.99.14.23:22-172.104.241.92:53658.service - OpenSSH per-connection server daemon (172.104.241.92:53658). Mar 7 00:53:26.681129 systemd[1]: Started sshd@13-88.99.14.23:22-172.104.241.92:53660.service - OpenSSH per-connection server daemon (172.104.241.92:53660). Mar 7 00:53:26.682335 systemd[1]: Started sshd@14-88.99.14.23:22-172.104.241.92:53670.service - OpenSSH per-connection server daemon (172.104.241.92:53670). 
Mar 7 00:53:26.708700 sshd[1975]: Connection closed by 172.104.241.92 port 53638 Mar 7 00:53:26.709199 sshd[1986]: Protocol major versions differ: 2 vs. 1 Mar 7 00:53:26.709783 sshd[1986]: banner exchange: Connection from 172.104.241.92 port 53658: could not read protocol version Mar 7 00:53:26.711300 systemd[1]: sshd@10-88.99.14.23:22-172.104.241.92:53638.service: Deactivated successfully. Mar 7 00:53:26.712767 systemd[1]: sshd@12-88.99.14.23:22-172.104.241.92:53658.service: Deactivated successfully. Mar 7 00:53:26.717199 sshd[1985]: Unable to negotiate with 172.104.241.92 port 53648: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1 [preauth] Mar 7 00:53:26.717529 sshd[1988]: Protocol major versions differ: 2 vs. 1 Mar 7 00:53:26.717791 sshd[1988]: banner exchange: Connection from 172.104.241.92 port 53670: could not read protocol version Mar 7 00:53:26.719132 systemd[1]: sshd@14-88.99.14.23:22-172.104.241.92:53670.service: Deactivated successfully. Mar 7 00:53:26.720435 systemd[1]: sshd@11-88.99.14.23:22-172.104.241.92:53648.service: Deactivated successfully. Mar 7 00:53:26.903944 systemd[1]: Started sshd@15-88.99.14.23:22-172.104.241.92:53676.service - OpenSSH per-connection server daemon (172.104.241.92:53676). Mar 7 00:53:27.023384 sshd[2001]: Unable to negotiate with 172.104.241.92 port 53676: no matching host key type found. Their offer: ssh-dss [preauth] Mar 7 00:53:27.025932 systemd[1]: sshd@15-88.99.14.23:22-172.104.241.92:53676.service: Deactivated successfully. Mar 7 00:53:27.029236 sshd[1987]: Invalid user flvwg from 172.104.241.92 port 53660 Mar 7 00:53:27.122081 sshd[1987]: Connection closed by invalid user flvwg 172.104.241.92 port 53660 [preauth] Mar 7 00:53:27.126385 systemd[1]: sshd@13-88.99.14.23:22-172.104.241.92:53660.service: Deactivated successfully. Mar 7 00:53:27.144277 systemd[1]: Started sshd@16-88.99.14.23:22-172.104.241.92:53686.service - OpenSSH per-connection server daemon (172.104.241.92:53686). Mar 7 00:53:27.271854 sshd[2012]: Unable to negotiate with 172.104.241.92 port 53686: no matching MAC found. Their offer: hmac-md5,hmac-sha1,hmac-ripemd160 [preauth] Mar 7 00:53:27.275153 systemd[1]: sshd@16-88.99.14.23:22-172.104.241.92:53686.service: Deactivated successfully. Mar 7 00:53:27.382213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415471924.mount: Deactivated successfully. Mar 7 00:53:27.491059 systemd[1]: Started sshd@17-88.99.14.23:22-172.104.241.92:53704.service - OpenSSH per-connection server daemon (172.104.241.92:53704). 
Mar 7 00:53:27.624101 containerd[1479]: time="2026-03-07T00:53:27.623990587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:27.625407 containerd[1479]: time="2026-03-07T00:53:27.625102227Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329609" Mar 7 00:53:27.625674 containerd[1479]: time="2026-03-07T00:53:27.625649267Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:27.627977 containerd[1479]: time="2026-03-07T00:53:27.627941827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:27.628200 sshd[2021]: Unable to negotiate with 172.104.241.92 port 53704: no matching MAC found. Their offer: hmac-md5,hmac-sha1,hmac-ripemd160 [preauth] Mar 7 00:53:27.630761 systemd[1]: sshd@17-88.99.14.23:22-172.104.241.92:53704.service: Deactivated successfully. Mar 7 00:53:27.631230 containerd[1479]: time="2026-03-07T00:53:27.631204187Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.21529236s" Mar 7 00:53:27.631310 containerd[1479]: time="2026-03-07T00:53:27.631295867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\"" Mar 7 00:53:27.631796 containerd[1479]: time="2026-03-07T00:53:27.631774787Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 7 00:53:27.843397 systemd[1]: Started sshd@18-88.99.14.23:22-172.104.241.92:53732.service - OpenSSH per-connection server daemon (172.104.241.92:53732). Mar 7 00:53:27.967762 sshd[2026]: Unable to negotiate with 172.104.241.92 port 53732: no matching host key type found. Their offer: ecdsa-sha2-nistp384 [preauth] Mar 7 00:53:27.969029 systemd[1]: sshd@18-88.99.14.23:22-172.104.241.92:53732.service: Deactivated successfully. Mar 7 00:53:28.091380 systemd[1]: Started sshd@19-88.99.14.23:22-172.104.241.92:53746.service - OpenSSH per-connection server daemon (172.104.241.92:53746). Mar 7 00:53:28.120939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3655571199.mount: Deactivated successfully. Mar 7 00:53:28.218518 sshd[2031]: Unable to negotiate with 172.104.241.92 port 53746: no matching host key type found. Their offer: ecdsa-sha2-nistp521 [preauth] Mar 7 00:53:28.224090 systemd[1]: sshd@19-88.99.14.23:22-172.104.241.92:53746.service: Deactivated successfully. Mar 7 00:53:28.336471 systemd[1]: Started sshd@20-88.99.14.23:22-172.104.241.92:53762.service - OpenSSH per-connection server daemon (172.104.241.92:53762). Mar 7 00:53:28.466610 sshd[2048]: Unable to negotiate with 172.104.241.92 port 53762: no matching MAC found. Their offer: hmac-md5,hmac-sha1,hmac-ripemd160 [preauth] Mar 7 00:53:28.468591 systemd[1]: sshd@20-88.99.14.23:22-172.104.241.92:53762.service: Deactivated successfully. 
Mar 7 00:53:28.717996 sshd[1976]: Connection closed by 172.104.241.92 port 53636 [preauth] Mar 7 00:53:28.720173 systemd[1]: sshd@9-88.99.14.23:22-172.104.241.92:53636.service: Deactivated successfully. Mar 7 00:53:29.138216 containerd[1479]: time="2026-03-07T00:53:29.138052187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.140546 containerd[1479]: time="2026-03-07T00:53:29.140474267Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172309" Mar 7 00:53:29.142921 containerd[1479]: time="2026-03-07T00:53:29.141923067Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.147775 containerd[1479]: time="2026-03-07T00:53:29.147405707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.149811 containerd[1479]: time="2026-03-07T00:53:29.149384827Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 1.51742252s" Mar 7 00:53:29.149811 containerd[1479]: time="2026-03-07T00:53:29.149434507Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Mar 7 00:53:29.150012 containerd[1479]: time="2026-03-07T00:53:29.149980187Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 7 00:53:29.579920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522535475.mount: Deactivated successfully. 
Mar 7 00:53:29.588351 containerd[1479]: time="2026-03-07T00:53:29.588283387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.589837 containerd[1479]: time="2026-03-07T00:53:29.589791947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268729" Mar 7 00:53:29.590742 containerd[1479]: time="2026-03-07T00:53:29.590433347Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.592969 containerd[1479]: time="2026-03-07T00:53:29.592936187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.594226 containerd[1479]: time="2026-03-07T00:53:29.594174227Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 444.15368ms" Mar 7 00:53:29.594226 containerd[1479]: time="2026-03-07T00:53:29.594215467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 7 00:53:29.595627 containerd[1479]: time="2026-03-07T00:53:29.595585867Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 7 00:53:30.097498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431679830.mount: Deactivated successfully. Mar 7 00:53:30.891914 containerd[1479]: time="2026-03-07T00:53:30.890279947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:30.892379 containerd[1479]: time="2026-03-07T00:53:30.891947747Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738239" Mar 7 00:53:30.892991 containerd[1479]: time="2026-03-07T00:53:30.892949547Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:30.897484 containerd[1479]: time="2026-03-07T00:53:30.897434507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:30.898994 containerd[1479]: time="2026-03-07T00:53:30.898960267Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.30313344s" Mar 7 00:53:30.899088 containerd[1479]: time="2026-03-07T00:53:30.899070507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Mar 7 00:53:30.914404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Mar 7 00:53:30.924131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:31.059082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:31.077481 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:31.124725 kubelet[2180]: E0307 00:53:31.124670 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:31.128110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:31.128274 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:33.247949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:33.258364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:33.287561 systemd[1]: Reloading requested from client PID 2204 ('systemctl') (unit session-7.scope)... Mar 7 00:53:33.287578 systemd[1]: Reloading... Mar 7 00:53:33.423934 zram_generator::config[2243]: No configuration found. Mar 7 00:53:33.509610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:53:33.587328 systemd[1]: Reloading finished in 299 ms. Mar 7 00:53:33.635802 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 00:53:33.636775 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 00:53:33.637453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:33.645409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:33.754243 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:33.769257 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:53:33.817968 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:53:34.329551 kubelet[2290]: I0307 00:53:34.329432 2290 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 00:53:34.329551 kubelet[2290]: I0307 00:53:34.329517 2290 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:53:34.331591 kubelet[2290]: I0307 00:53:34.331483 2290 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 00:53:34.331591 kubelet[2290]: I0307 00:53:34.331521 2290 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 00:53:34.332001 kubelet[2290]: I0307 00:53:34.331965 2290 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 00:53:34.343619 kubelet[2290]: I0307 00:53:34.343552 2290 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:53:34.344312 kubelet[2290]: E0307 00:53:34.344265 2290 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://88.99.14.23:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 88.99.14.23:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 00:53:34.349655 kubelet[2290]: E0307 00:53:34.349554 2290 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:53:34.349655 kubelet[2290]: I0307 00:53:34.349626 2290 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 00:53:34.352698 kubelet[2290]: I0307 00:53:34.352252 2290 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Mar 7 00:53:34.353845 kubelet[2290]: I0307 00:53:34.353788 2290 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:53:34.354480 kubelet[2290]: I0307 00:53:34.354002 2290 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-d5610c1cbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 00:53:34.354691 kubelet[2290]: I0307 00:53:34.354673 2290 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 00:53:34.354761 kubelet[2290]: I0307 00:53:34.354751 2290 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 00:53:34.354969 kubelet[2290]: I0307 00:53:34.354952 2290 container_manager_linux.go:317] "Creating Dynamic Resource Allocation 
(DRA) manager" Mar 7 00:53:34.358103 kubelet[2290]: I0307 00:53:34.358080 2290 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 00:53:34.358424 kubelet[2290]: I0307 00:53:34.358410 2290 kubelet.go:482] "Attempting to sync node with API server" Mar 7 00:53:34.358534 kubelet[2290]: I0307 00:53:34.358486 2290 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:53:34.358606 kubelet[2290]: I0307 00:53:34.358597 2290 kubelet.go:394] "Adding apiserver pod source" Mar 7 00:53:34.358658 kubelet[2290]: I0307 00:53:34.358650 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:53:34.362941 kubelet[2290]: I0307 00:53:34.362115 2290 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:53:34.363361 kubelet[2290]: I0307 00:53:34.363344 2290 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:53:34.363450 kubelet[2290]: I0307 00:53:34.363439 2290 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 00:53:34.363564 kubelet[2290]: W0307 00:53:34.363553 2290 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 00:53:34.366112 kubelet[2290]: I0307 00:53:34.366097 2290 server.go:1257] "Started kubelet" Mar 7 00:53:34.367566 kubelet[2290]: I0307 00:53:34.367484 2290 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:53:34.372132 kubelet[2290]: I0307 00:53:34.372076 2290 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:53:34.372256 kubelet[2290]: I0307 00:53:34.372243 2290 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 00:53:34.372448 kubelet[2290]: I0307 00:53:34.372416 2290 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:53:34.372702 kubelet[2290]: I0307 00:53:34.372686 2290 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:53:34.375630 kubelet[2290]: I0307 00:53:34.375591 2290 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 00:53:34.380556 kubelet[2290]: E0307 00:53:34.379163 2290 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://88.99.14.23:6443/api/v1/namespaces/default/events\": dial tcp 88.99.14.23:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-d5610c1cbf.189a68f73a2edbeb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-d5610c1cbf,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-d5610c1cbf,},FirstTimestamp:2026-03-07 00:53:34.366071787 +0000 UTC m=+0.592794881,LastTimestamp:2026-03-07 00:53:34.366071787 +0000 UTC m=+0.592794881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-d5610c1cbf,}" Mar 7 00:53:34.381234 kubelet[2290]: I0307 00:53:34.381152 2290 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:53:34.381508 kubelet[2290]: I0307 00:53:34.381482 2290 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 00:53:34.381861 kubelet[2290]: E0307 00:53:34.381673 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:34.382538 kubelet[2290]: E0307 00:53:34.382465 2290 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.14.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-d5610c1cbf?timeout=10s\": dial tcp 88.99.14.23:6443: connect: connection refused" interval="200ms" Mar 7 00:53:34.382847 kubelet[2290]: I0307 00:53:34.382828 2290 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:53:34.383036 kubelet[2290]: I0307 00:53:34.383003 2290 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:53:34.385171 kubelet[2290]: I0307 00:53:34.385148 2290 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 00:53:34.385245 kubelet[2290]: I0307 00:53:34.385213 2290 reconciler.go:29] "Reconciler: start to sync state" Mar 7 00:53:34.388240 kubelet[2290]: I0307 00:53:34.388219 2290 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:53:34.396339 kubelet[2290]: I0307 00:53:34.396289 2290 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 00:53:34.397375 kubelet[2290]: I0307 00:53:34.397283 2290 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 00:53:34.397375 kubelet[2290]: I0307 00:53:34.397305 2290 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 00:53:34.397375 kubelet[2290]: I0307 00:53:34.397323 2290 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 00:53:34.397375 kubelet[2290]: E0307 00:53:34.397362 2290 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:53:34.405129 kubelet[2290]: E0307 00:53:34.405087 2290 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:53:34.416372 kubelet[2290]: I0307 00:53:34.416351 2290 cpu_manager.go:225] "Starting" policy="none" Mar 7 00:53:34.416706 kubelet[2290]: I0307 00:53:34.416567 2290 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 00:53:34.416706 kubelet[2290]: I0307 00:53:34.416591 2290 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 00:53:34.419789 kubelet[2290]: I0307 00:53:34.419475 2290 policy_none.go:50] "Start" Mar 7 00:53:34.419789 kubelet[2290]: I0307 00:53:34.419527 2290 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 00:53:34.419789 kubelet[2290]: I0307 00:53:34.419542 2290 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 00:53:34.420899 kubelet[2290]: I0307 00:53:34.420825 2290 policy_none.go:44] "Start" Mar 7 00:53:34.427891 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 7 00:53:34.440410 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 00:53:34.445209 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 00:53:34.453712 kubelet[2290]: E0307 00:53:34.453671 2290 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:53:34.454425 kubelet[2290]: I0307 00:53:34.454208 2290 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 00:53:34.454425 kubelet[2290]: I0307 00:53:34.454249 2290 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:53:34.455103 kubelet[2290]: I0307 00:53:34.454974 2290 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 00:53:34.458435 kubelet[2290]: E0307 00:53:34.458378 2290 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:53:34.458435 kubelet[2290]: E0307 00:53:34.458419 2290 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:34.515532 systemd[1]: Created slice kubepods-burstable-pode5818930988c7a1539ab5401bd97b167.slice - libcontainer container kubepods-burstable-pode5818930988c7a1539ab5401bd97b167.slice. Mar 7 00:53:34.524769 kubelet[2290]: E0307 00:53:34.524688 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.529471 systemd[1]: Created slice kubepods-burstable-podf75cdbada91783a73966f898dd0b2647.slice - libcontainer container kubepods-burstable-podf75cdbada91783a73966f898dd0b2647.slice. Mar 7 00:53:34.542916 kubelet[2290]: E0307 00:53:34.542401 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.546425 systemd[1]: Created slice kubepods-burstable-pod865b2e616385cd559867fa88bb7e6fbf.slice - libcontainer container kubepods-burstable-pod865b2e616385cd559867fa88bb7e6fbf.slice. 
Mar 7 00:53:34.548431 kubelet[2290]: E0307 00:53:34.548382 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.557342 kubelet[2290]: I0307 00:53:34.557278 2290 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.558028 kubelet[2290]: E0307 00:53:34.557968 2290 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://88.99.14.23:6443/api/v1/nodes\": dial tcp 88.99.14.23:6443: connect: connection refused" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.584936 kubelet[2290]: E0307 00:53:34.583799 2290 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.14.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-d5610c1cbf?timeout=10s\": dial tcp 88.99.14.23:6443: connect: connection refused" interval="400ms" Mar 7 00:53:34.687257 kubelet[2290]: I0307 00:53:34.686995 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687257 kubelet[2290]: I0307 00:53:34.687042 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687257 kubelet[2290]: I0307 00:53:34.687060 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f75cdbada91783a73966f898dd0b2647-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-d5610c1cbf\" (UID: \"f75cdbada91783a73966f898dd0b2647\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687257 kubelet[2290]: I0307 00:53:34.687075 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/865b2e616385cd559867fa88bb7e6fbf-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" (UID: \"865b2e616385cd559867fa88bb7e6fbf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687257 kubelet[2290]: I0307 00:53:34.687091 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/865b2e616385cd559867fa88bb7e6fbf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" (UID: \"865b2e616385cd559867fa88bb7e6fbf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687518 kubelet[2290]: I0307 00:53:34.687117 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687518 kubelet[2290]: I0307 00:53:34.687135 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687518 kubelet[2290]: I0307 00:53:34.687161 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.687518 kubelet[2290]: I0307 00:53:34.687179 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/865b2e616385cd559867fa88bb7e6fbf-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" (UID: \"865b2e616385cd559867fa88bb7e6fbf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.760343 kubelet[2290]: I0307 00:53:34.760296 2290 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.760822 kubelet[2290]: E0307 00:53:34.760747 2290 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://88.99.14.23:6443/api/v1/nodes\": dial tcp 88.99.14.23:6443: connect: connection refused" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:34.829300 containerd[1479]: time="2026-03-07T00:53:34.829115507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-d5610c1cbf,Uid:e5818930988c7a1539ab5401bd97b167,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:34.846342 containerd[1479]: time="2026-03-07T00:53:34.845914187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-d5610c1cbf,Uid:f75cdbada91783a73966f898dd0b2647,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:34.851867 containerd[1479]: time="2026-03-07T00:53:34.851803027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-d5610c1cbf,Uid:865b2e616385cd559867fa88bb7e6fbf,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:34.985445 kubelet[2290]: E0307 00:53:34.985374 2290 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://88.99.14.23:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-d5610c1cbf?timeout=10s\": dial tcp 88.99.14.23:6443: connect: connection refused" interval="800ms" Mar 7 00:53:35.164341 kubelet[2290]: I0307 00:53:35.164048 2290 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:35.164462 kubelet[2290]: E0307 00:53:35.164392 2290 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://88.99.14.23:6443/api/v1/nodes\": dial tcp 88.99.14.23:6443: connect: connection refused" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:35.278647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163147863.mount: Deactivated successfully. 
Mar 7 00:53:35.284327 containerd[1479]: time="2026-03-07T00:53:35.284245907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:35.286112 containerd[1479]: time="2026-03-07T00:53:35.286059827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 7 00:53:35.288688 containerd[1479]: time="2026-03-07T00:53:35.288636867Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:35.290715 containerd[1479]: time="2026-03-07T00:53:35.290622947Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:35.291986 containerd[1479]: time="2026-03-07T00:53:35.291918987Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:35.293636 containerd[1479]: time="2026-03-07T00:53:35.293585427Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:53:35.297914 containerd[1479]: time="2026-03-07T00:53:35.296714947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:35.298014 containerd[1479]: time="2026-03-07T00:53:35.297977427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 468.7664ms" Mar 7 00:53:35.299904 containerd[1479]: time="2026-03-07T00:53:35.299856307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:53:35.303487 containerd[1479]: time="2026-03-07T00:53:35.303449147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 449.72952ms" Mar 7 00:53:35.304692 containerd[1479]: time="2026-03-07T00:53:35.304650787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.63764ms" Mar 7 00:53:35.409719 containerd[1479]: time="2026-03-07T00:53:35.409462307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:35.409719 containerd[1479]: time="2026-03-07T00:53:35.409533547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:35.409719 containerd[1479]: time="2026-03-07T00:53:35.409560507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:35.409719 containerd[1479]: time="2026-03-07T00:53:35.409640187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:35.413643 containerd[1479]: time="2026-03-07T00:53:35.413415587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:35.413643 containerd[1479]: time="2026-03-07T00:53:35.413514107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:35.413643 containerd[1479]: time="2026-03-07T00:53:35.413533467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:35.415122 containerd[1479]: time="2026-03-07T00:53:35.414413907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:35.419939 containerd[1479]: time="2026-03-07T00:53:35.419294707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:35.419939 containerd[1479]: time="2026-03-07T00:53:35.419343147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:35.419939 containerd[1479]: time="2026-03-07T00:53:35.419357867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:35.419939 containerd[1479]: time="2026-03-07T00:53:35.419427707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:35.441106 systemd[1]: Started cri-containerd-5404c9a14ed990719a5d29efc3bb7023ce0c2da5f101421b6ffcef32cb8a341f.scope - libcontainer container 5404c9a14ed990719a5d29efc3bb7023ce0c2da5f101421b6ffcef32cb8a341f. Mar 7 00:53:35.445566 systemd[1]: Started cri-containerd-e5337c1984cd246b21f490e5c13cbb041f3f12ca13813d4123084a44515c1d38.scope - libcontainer container e5337c1984cd246b21f490e5c13cbb041f3f12ca13813d4123084a44515c1d38. Mar 7 00:53:35.469574 systemd[1]: Started cri-containerd-8556dd17c215f8b52e852f3c9842e162385da0fface79978d57a70f6bf20fcb2.scope - libcontainer container 8556dd17c215f8b52e852f3c9842e162385da0fface79978d57a70f6bf20fcb2. 
Mar 7 00:53:35.505047 containerd[1479]: time="2026-03-07T00:53:35.505010227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-d5610c1cbf,Uid:e5818930988c7a1539ab5401bd97b167,Namespace:kube-system,Attempt:0,} returns sandbox id \"5404c9a14ed990719a5d29efc3bb7023ce0c2da5f101421b6ffcef32cb8a341f\"" Mar 7 00:53:35.513480 containerd[1479]: time="2026-03-07T00:53:35.513419067Z" level=info msg="CreateContainer within sandbox \"5404c9a14ed990719a5d29efc3bb7023ce0c2da5f101421b6ffcef32cb8a341f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 00:53:35.529372 containerd[1479]: time="2026-03-07T00:53:35.529270427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-d5610c1cbf,Uid:865b2e616385cd559867fa88bb7e6fbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"8556dd17c215f8b52e852f3c9842e162385da0fface79978d57a70f6bf20fcb2\"" Mar 7 00:53:35.533666 containerd[1479]: time="2026-03-07T00:53:35.533536627Z" level=info msg="CreateContainer within sandbox \"8556dd17c215f8b52e852f3c9842e162385da0fface79978d57a70f6bf20fcb2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 00:53:35.536211 containerd[1479]: time="2026-03-07T00:53:35.536086067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-d5610c1cbf,Uid:f75cdbada91783a73966f898dd0b2647,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5337c1984cd246b21f490e5c13cbb041f3f12ca13813d4123084a44515c1d38\"" Mar 7 00:53:35.540961 containerd[1479]: time="2026-03-07T00:53:35.540930747Z" level=info msg="CreateContainer within sandbox \"e5337c1984cd246b21f490e5c13cbb041f3f12ca13813d4123084a44515c1d38\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 00:53:35.541230 containerd[1479]: time="2026-03-07T00:53:35.541202067Z" level=info msg="CreateContainer within sandbox \"5404c9a14ed990719a5d29efc3bb7023ce0c2da5f101421b6ffcef32cb8a341f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e\"" Mar 7 00:53:35.543265 containerd[1479]: time="2026-03-07T00:53:35.543234747Z" level=info msg="StartContainer for \"1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e\"" Mar 7 00:53:35.571332 containerd[1479]: time="2026-03-07T00:53:35.570921587Z" level=info msg="CreateContainer within sandbox \"e5337c1984cd246b21f490e5c13cbb041f3f12ca13813d4123084a44515c1d38\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d\"" Mar 7 00:53:35.571057 systemd[1]: Started cri-containerd-1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e.scope - libcontainer container 1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e. 
Mar 7 00:53:35.573023 containerd[1479]: time="2026-03-07T00:53:35.572871427Z" level=info msg="StartContainer for \"9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d\"" Mar 7 00:53:35.574034 containerd[1479]: time="2026-03-07T00:53:35.574004987Z" level=info msg="CreateContainer within sandbox \"8556dd17c215f8b52e852f3c9842e162385da0fface79978d57a70f6bf20fcb2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c54d7765030aa9d11c7ce1d5e59df11aa046ccc07b16c80d98a3c17d1e78c5ab\"" Mar 7 00:53:35.574431 containerd[1479]: time="2026-03-07T00:53:35.574332307Z" level=info msg="StartContainer for \"c54d7765030aa9d11c7ce1d5e59df11aa046ccc07b16c80d98a3c17d1e78c5ab\"" Mar 7 00:53:35.622066 systemd[1]: Started cri-containerd-c54d7765030aa9d11c7ce1d5e59df11aa046ccc07b16c80d98a3c17d1e78c5ab.scope - libcontainer container c54d7765030aa9d11c7ce1d5e59df11aa046ccc07b16c80d98a3c17d1e78c5ab. Mar 7 00:53:35.625473 systemd[1]: Started cri-containerd-9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d.scope - libcontainer container 9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d. Mar 7 00:53:35.633194 containerd[1479]: time="2026-03-07T00:53:35.633146067Z" level=info msg="StartContainer for \"1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e\" returns successfully" Mar 7 00:53:35.674773 containerd[1479]: time="2026-03-07T00:53:35.673232067Z" level=info msg="StartContainer for \"c54d7765030aa9d11c7ce1d5e59df11aa046ccc07b16c80d98a3c17d1e78c5ab\" returns successfully" Mar 7 00:53:35.706547 containerd[1479]: time="2026-03-07T00:53:35.706425707Z" level=info msg="StartContainer for \"9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d\" returns successfully" Mar 7 00:53:35.968167 kubelet[2290]: I0307 00:53:35.966862 2290 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:36.426858 kubelet[2290]: E0307 00:53:36.426652 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:36.428405 kubelet[2290]: E0307 00:53:36.428187 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:36.431440 kubelet[2290]: E0307 00:53:36.431292 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:36.924977 kubelet[2290]: E0307 00:53:36.924945 2290 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:37.019907 kubelet[2290]: I0307 00:53:37.019228 2290 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:37.020050 kubelet[2290]: E0307 00:53:37.019978 2290 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-d5610c1cbf\": node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.042215 kubelet[2290]: E0307 00:53:37.042171 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.142620 kubelet[2290]: E0307 00:53:37.142588 2290 kubelet_node_status.go:392] "Error getting 
the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.244024 kubelet[2290]: E0307 00:53:37.243643 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.344347 kubelet[2290]: E0307 00:53:37.344286 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.434161 kubelet[2290]: E0307 00:53:37.434111 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:37.434656 kubelet[2290]: E0307 00:53:37.434436 2290 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:37.445426 kubelet[2290]: E0307 00:53:37.445359 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.545783 kubelet[2290]: E0307 00:53:37.545617 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.646591 kubelet[2290]: E0307 00:53:37.646526 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.747723 kubelet[2290]: E0307 00:53:37.747671 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.848054 kubelet[2290]: E0307 00:53:37.847827 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:37.949772 kubelet[2290]: E0307 00:53:37.948872 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:38.049106 kubelet[2290]: E0307 00:53:38.049035 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:38.150401 kubelet[2290]: E0307 00:53:38.150165 2290 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-d5610c1cbf\" not found" Mar 7 00:53:38.282580 kubelet[2290]: I0307 00:53:38.282039 2290 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:38.293802 kubelet[2290]: I0307 00:53:38.293770 2290 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:38.298502 kubelet[2290]: I0307 00:53:38.298384 2290 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:38.364263 kubelet[2290]: I0307 00:53:38.364228 2290 apiserver.go:52] "Watching apiserver" Mar 7 00:53:38.386543 kubelet[2290]: I0307 00:53:38.386481 2290 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 00:53:38.436376 kubelet[2290]: I0307 00:53:38.436197 2290 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:38.451978 kubelet[2290]: E0307 00:53:38.451923 2290 kubelet.go:3342] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:39.220033 systemd[1]: Reloading requested from client PID 2577 ('systemctl') (unit session-7.scope)... Mar 7 00:53:39.220052 systemd[1]: Reloading... Mar 7 00:53:39.323908 zram_generator::config[2617]: No configuration found. Mar 7 00:53:39.415677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:53:39.508297 systemd[1]: Reloading finished in 287 ms. Mar 7 00:53:39.551079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:39.566513 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:53:39.566909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:39.575584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:39.713111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:39.723314 (kubelet)[2662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:53:39.788855 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:53:39.799404 kubelet[2662]: I0307 00:53:39.799119 2662 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 7 00:53:39.800916 kubelet[2662]: I0307 00:53:39.799596 2662 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:53:39.800916 kubelet[2662]: I0307 00:53:39.799630 2662 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 7 00:53:39.800916 kubelet[2662]: I0307 00:53:39.799636 2662 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 00:53:39.800916 kubelet[2662]: I0307 00:53:39.799942 2662 server.go:951] "Client rotation is on, will bootstrap in background" Mar 7 00:53:39.801261 kubelet[2662]: I0307 00:53:39.801216 2662 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 00:53:39.803989 kubelet[2662]: I0307 00:53:39.803943 2662 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:53:39.808193 kubelet[2662]: E0307 00:53:39.808154 2662 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:53:39.808335 kubelet[2662]: I0307 00:53:39.808212 2662 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 7 00:53:39.811590 kubelet[2662]: I0307 00:53:39.811528 2662 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 7 00:53:39.811750 kubelet[2662]: I0307 00:53:39.811710 2662 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:53:39.811976 kubelet[2662]: I0307 00:53:39.811734 2662 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-d5610c1cbf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 00:53:39.812088 kubelet[2662]: I0307 00:53:39.811983 2662 topology_manager.go:143] "Creating topology manager with none policy" Mar 7 00:53:39.812088 kubelet[2662]: I0307 00:53:39.811991 2662 container_manager_linux.go:308] "Creating device plugin manager" Mar 7 00:53:39.812088 kubelet[2662]: I0307 00:53:39.812015 2662 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 7 00:53:39.812326 kubelet[2662]: I0307 00:53:39.812201 2662 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 7 00:53:39.812326 kubelet[2662]: I0307 00:53:39.812329 2662 kubelet.go:482] "Attempting to sync node with API server" Mar 7 00:53:39.812437 kubelet[2662]: I0307 00:53:39.812342 2662 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:53:39.812437 kubelet[2662]: I0307 00:53:39.812368 2662 kubelet.go:394] "Adding apiserver pod source" Mar 7 00:53:39.812437 kubelet[2662]: I0307 00:53:39.812376 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:53:39.818900 kubelet[2662]: I0307 00:53:39.817564 2662 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:53:39.820886 kubelet[2662]: I0307 00:53:39.819720 2662 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:53:39.821003 kubelet[2662]: I0307 00:53:39.820989 2662 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 7 00:53:39.827103 
kubelet[2662]: I0307 00:53:39.827079 2662 server.go:1257] "Started kubelet" Mar 7 00:53:39.829128 kubelet[2662]: I0307 00:53:39.829105 2662 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 7 00:53:39.846083 kubelet[2662]: I0307 00:53:39.846019 2662 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:53:39.847199 kubelet[2662]: I0307 00:53:39.847177 2662 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:53:39.855970 kubelet[2662]: I0307 00:53:39.854590 2662 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:53:39.855970 kubelet[2662]: I0307 00:53:39.854678 2662 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 7 00:53:39.855970 kubelet[2662]: I0307 00:53:39.854842 2662 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:53:39.855970 kubelet[2662]: I0307 00:53:39.855136 2662 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:53:39.859917 kubelet[2662]: I0307 00:53:39.858214 2662 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 7 00:53:39.859917 kubelet[2662]: I0307 00:53:39.858339 2662 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 7 00:53:39.859917 kubelet[2662]: I0307 00:53:39.858511 2662 reconciler.go:29] "Reconciler: start to sync state" Mar 7 00:53:39.862933 kubelet[2662]: I0307 00:53:39.861847 2662 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 7 00:53:39.862933 kubelet[2662]: I0307 00:53:39.862927 2662 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 7 00:53:39.863067 kubelet[2662]: I0307 00:53:39.862946 2662 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 7 00:53:39.863067 kubelet[2662]: I0307 00:53:39.862970 2662 kubelet.go:2501] "Starting kubelet main sync loop" Mar 7 00:53:39.863067 kubelet[2662]: E0307 00:53:39.863009 2662 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:53:39.867528 kubelet[2662]: I0307 00:53:39.866788 2662 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:53:39.867797 kubelet[2662]: I0307 00:53:39.867771 2662 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:53:39.873816 kubelet[2662]: I0307 00:53:39.873781 2662 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:53:39.875192 kubelet[2662]: E0307 00:53:39.875157 2662 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922097 2662 cpu_manager.go:225] "Starting" policy="none" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922114 2662 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922137 2662 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922266 2662 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922276 2662 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922294 2662 policy_none.go:50] "Start" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922302 2662 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922310 2662 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922399 2662 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 7 00:53:39.922659 kubelet[2662]: I0307 00:53:39.922410 2662 policy_none.go:44] "Start" Mar 7 00:53:39.929958 kubelet[2662]: E0307 00:53:39.929931 2662 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:53:39.930137 kubelet[2662]: I0307 00:53:39.930116 2662 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 7 00:53:39.930171 kubelet[2662]: I0307 00:53:39.930134 2662 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:53:39.930967 kubelet[2662]: I0307 00:53:39.930853 2662 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 7 00:53:39.935167 kubelet[2662]: E0307 00:53:39.935109 2662 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 7 00:53:39.964383 kubelet[2662]: I0307 00:53:39.964341 2662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:39.965492 kubelet[2662]: I0307 00:53:39.964793 2662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:39.965492 kubelet[2662]: I0307 00:53:39.964991 2662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:39.973857 kubelet[2662]: E0307 00:53:39.973797 2662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:39.974752 kubelet[2662]: E0307 00:53:39.974616 2662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-d5610c1cbf\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:39.975568 kubelet[2662]: E0307 00:53:39.975543 2662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.040409 kubelet[2662]: I0307 00:53:40.039438 2662 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.052346 kubelet[2662]: I0307 00:53:40.052309 2662 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.052498 kubelet[2662]: I0307 00:53:40.052432 2662 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.060436 kubelet[2662]: I0307 00:53:40.060226 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.060436 kubelet[2662]: I0307 00:53:40.060303 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.060436 kubelet[2662]: I0307 00:53:40.060321 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/865b2e616385cd559867fa88bb7e6fbf-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" (UID: \"865b2e616385cd559867fa88bb7e6fbf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.060436 kubelet[2662]: I0307 00:53:40.060338 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.060436 kubelet[2662]: I0307 
00:53:40.060377 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.060672 kubelet[2662]: I0307 00:53:40.060400 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f75cdbada91783a73966f898dd0b2647-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-d5610c1cbf\" (UID: \"f75cdbada91783a73966f898dd0b2647\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.060672 kubelet[2662]: I0307 00:53:40.060413 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/865b2e616385cd559867fa88bb7e6fbf-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" (UID: \"865b2e616385cd559867fa88bb7e6fbf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.061895 kubelet[2662]: I0307 00:53:40.061800 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/865b2e616385cd559867fa88bb7e6fbf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" (UID: \"865b2e616385cd559867fa88bb7e6fbf\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.061895 kubelet[2662]: I0307 00:53:40.061845 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5818930988c7a1539ab5401bd97b167-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-d5610c1cbf\" (UID: \"e5818930988c7a1539ab5401bd97b167\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.814958 kubelet[2662]: I0307 00:53:40.814910 2662 apiserver.go:52] "Watching apiserver" Mar 7 00:53:40.859966 kubelet[2662]: I0307 00:53:40.859399 2662 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 7 00:53:40.913358 kubelet[2662]: I0307 00:53:40.911552 2662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.916808 kubelet[2662]: I0307 00:53:40.916378 2662 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.930791 kubelet[2662]: E0307 00:53:40.930579 2662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-d5610c1cbf\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:40.932069 kubelet[2662]: E0307 00:53:40.931921 2662 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-d5610c1cbf\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" Mar 7 00:53:41.609813 kubelet[2662]: I0307 00:53:41.609730 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-d5610c1cbf" podStartSLOduration=3.609692522 podStartE2EDuration="3.609692522s" podCreationTimestamp="2026-03-07 00:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:41.597232275 +0000 UTC m=+1.867973169" watchObservedRunningTime="2026-03-07 00:53:41.609692522 +0000 UTC m=+1.880433456" Mar 7 00:53:41.621984 kubelet[2662]: I0307 00:53:41.621916 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-d5610c1cbf" podStartSLOduration=3.621898629 podStartE2EDuration="3.621898629s" podCreationTimestamp="2026-03-07 00:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:41.611962858 +0000 UTC m=+1.882703752" watchObservedRunningTime="2026-03-07 00:53:41.621898629 +0000 UTC m=+1.892639523" Mar 7 00:53:44.524784 kubelet[2662]: I0307 00:53:44.524717 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-d5610c1cbf" podStartSLOduration=6.524663404 podStartE2EDuration="6.524663404s" podCreationTimestamp="2026-03-07 00:53:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:41.621842864 +0000 UTC m=+1.892583758" watchObservedRunningTime="2026-03-07 00:53:44.524663404 +0000 UTC m=+4.795404298" Mar 7 00:53:45.062987 kubelet[2662]: I0307 00:53:45.062935 2662 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:53:45.063628 containerd[1479]: time="2026-03-07T00:53:45.063369991Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 00:53:45.064900 kubelet[2662]: I0307 00:53:45.064529 2662 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:53:46.103751 systemd[1]: Created slice kubepods-besteffort-podfbed81e0_da5d_4de1_85b7_50214658639c.slice - libcontainer container kubepods-besteffort-podfbed81e0_da5d_4de1_85b7_50214658639c.slice. 
Mar 7 00:53:46.202897 kubelet[2662]: I0307 00:53:46.200665 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fbed81e0-da5d-4de1-85b7-50214658639c-kube-proxy\") pod \"kube-proxy-rhdtm\" (UID: \"fbed81e0-da5d-4de1-85b7-50214658639c\") " pod="kube-system/kube-proxy-rhdtm" Mar 7 00:53:46.202897 kubelet[2662]: I0307 00:53:46.200711 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbed81e0-da5d-4de1-85b7-50214658639c-xtables-lock\") pod \"kube-proxy-rhdtm\" (UID: \"fbed81e0-da5d-4de1-85b7-50214658639c\") " pod="kube-system/kube-proxy-rhdtm" Mar 7 00:53:46.202897 kubelet[2662]: I0307 00:53:46.200727 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbed81e0-da5d-4de1-85b7-50214658639c-lib-modules\") pod \"kube-proxy-rhdtm\" (UID: \"fbed81e0-da5d-4de1-85b7-50214658639c\") " pod="kube-system/kube-proxy-rhdtm" Mar 7 00:53:46.202897 kubelet[2662]: I0307 00:53:46.200745 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvzc8\" (UniqueName: \"kubernetes.io/projected/fbed81e0-da5d-4de1-85b7-50214658639c-kube-api-access-zvzc8\") pod \"kube-proxy-rhdtm\" (UID: \"fbed81e0-da5d-4de1-85b7-50214658639c\") " pod="kube-system/kube-proxy-rhdtm" Mar 7 00:53:46.368311 systemd[1]: Created slice kubepods-besteffort-podd8ffb3f5_314b_4084_bf68_2c459faf8921.slice - libcontainer container kubepods-besteffort-podd8ffb3f5_314b_4084_bf68_2c459faf8921.slice. Mar 7 00:53:46.403285 kubelet[2662]: I0307 00:53:46.403031 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8ffb3f5-314b-4084-bf68-2c459faf8921-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-hmg2l\" (UID: \"d8ffb3f5-314b-4084-bf68-2c459faf8921\") " pod="tigera-operator/tigera-operator-6cf4cccc57-hmg2l" Mar 7 00:53:46.403285 kubelet[2662]: I0307 00:53:46.403171 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvj9\" (UniqueName: \"kubernetes.io/projected/d8ffb3f5-314b-4084-bf68-2c459faf8921-kube-api-access-7dvj9\") pod \"tigera-operator-6cf4cccc57-hmg2l\" (UID: \"d8ffb3f5-314b-4084-bf68-2c459faf8921\") " pod="tigera-operator/tigera-operator-6cf4cccc57-hmg2l" Mar 7 00:53:46.417027 containerd[1479]: time="2026-03-07T00:53:46.416819870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhdtm,Uid:fbed81e0-da5d-4de1-85b7-50214658639c,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:46.443300 containerd[1479]: time="2026-03-07T00:53:46.443052024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:46.443300 containerd[1479]: time="2026-03-07T00:53:46.443123108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:46.443300 containerd[1479]: time="2026-03-07T00:53:46.443154110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:46.443466 containerd[1479]: time="2026-03-07T00:53:46.443249355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:46.462240 systemd[1]: run-containerd-runc-k8s.io-c1b44353d4cb538865639e465728f6f259ad33f17691ceec5342c4026a0e83c4-runc.JVjKGT.mount: Deactivated successfully. Mar 7 00:53:46.470131 systemd[1]: Started cri-containerd-c1b44353d4cb538865639e465728f6f259ad33f17691ceec5342c4026a0e83c4.scope - libcontainer container c1b44353d4cb538865639e465728f6f259ad33f17691ceec5342c4026a0e83c4. Mar 7 00:53:46.499363 containerd[1479]: time="2026-03-07T00:53:46.499324306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhdtm,Uid:fbed81e0-da5d-4de1-85b7-50214658639c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1b44353d4cb538865639e465728f6f259ad33f17691ceec5342c4026a0e83c4\"" Mar 7 00:53:46.507292 containerd[1479]: time="2026-03-07T00:53:46.506855049Z" level=info msg="CreateContainer within sandbox \"c1b44353d4cb538865639e465728f6f259ad33f17691ceec5342c4026a0e83c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:53:46.526176 containerd[1479]: time="2026-03-07T00:53:46.526123651Z" level=info msg="CreateContainer within sandbox \"c1b44353d4cb538865639e465728f6f259ad33f17691ceec5342c4026a0e83c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d77530b01d7584b5706a3228ec7f96c388293d01d9ca51c14dc289360edbcfe7\"" Mar 7 00:53:46.526948 containerd[1479]: time="2026-03-07T00:53:46.526921616Z" level=info msg="StartContainer for \"d77530b01d7584b5706a3228ec7f96c388293d01d9ca51c14dc289360edbcfe7\"" Mar 7 00:53:46.554232 systemd[1]: Started cri-containerd-d77530b01d7584b5706a3228ec7f96c388293d01d9ca51c14dc289360edbcfe7.scope - libcontainer container d77530b01d7584b5706a3228ec7f96c388293d01d9ca51c14dc289360edbcfe7. Mar 7 00:53:46.583495 containerd[1479]: time="2026-03-07T00:53:46.583386509Z" level=info msg="StartContainer for \"d77530b01d7584b5706a3228ec7f96c388293d01d9ca51c14dc289360edbcfe7\" returns successfully" Mar 7 00:53:46.675178 containerd[1479]: time="2026-03-07T00:53:46.674364621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-hmg2l,Uid:d8ffb3f5-314b-4084-bf68-2c459faf8921,Namespace:tigera-operator,Attempt:0,}" Mar 7 00:53:46.707303 containerd[1479]: time="2026-03-07T00:53:46.702645850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:46.707303 containerd[1479]: time="2026-03-07T00:53:46.702720334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:46.707303 containerd[1479]: time="2026-03-07T00:53:46.702732055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:46.707303 containerd[1479]: time="2026-03-07T00:53:46.702836740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:46.727111 systemd[1]: Started cri-containerd-40759e38bbfd3cfa2c805950f752b8e06a13eeca925ef9d6a5278b61993c6c7f.scope - libcontainer container 40759e38bbfd3cfa2c805950f752b8e06a13eeca925ef9d6a5278b61993c6c7f. 
Mar 7 00:53:46.764238 containerd[1479]: time="2026-03-07T00:53:46.763862969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-hmg2l,Uid:d8ffb3f5-314b-4084-bf68-2c459faf8921,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"40759e38bbfd3cfa2c805950f752b8e06a13eeca925ef9d6a5278b61993c6c7f\"" Mar 7 00:53:46.766615 containerd[1479]: time="2026-03-07T00:53:46.766377111Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 00:53:48.568001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665052535.mount: Deactivated successfully. Mar 7 00:53:48.956623 containerd[1479]: time="2026-03-07T00:53:48.956551677Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:48.958612 containerd[1479]: time="2026-03-07T00:53:48.958559017Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 00:53:48.959711 containerd[1479]: time="2026-03-07T00:53:48.959658671Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:48.962218 containerd[1479]: time="2026-03-07T00:53:48.962149874Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:48.963485 containerd[1479]: time="2026-03-07T00:53:48.962758304Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.196338711s" Mar 7 00:53:48.963485 containerd[1479]: time="2026-03-07T00:53:48.962791586Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 00:53:48.969329 containerd[1479]: time="2026-03-07T00:53:48.969291747Z" level=info msg="CreateContainer within sandbox \"40759e38bbfd3cfa2c805950f752b8e06a13eeca925ef9d6a5278b61993c6c7f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 00:53:48.991787 containerd[1479]: time="2026-03-07T00:53:48.991744495Z" level=info msg="CreateContainer within sandbox \"40759e38bbfd3cfa2c805950f752b8e06a13eeca925ef9d6a5278b61993c6c7f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6\"" Mar 7 00:53:48.992717 containerd[1479]: time="2026-03-07T00:53:48.992686182Z" level=info msg="StartContainer for \"d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6\"" Mar 7 00:53:49.023589 systemd[1]: Started cri-containerd-d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6.scope - libcontainer container d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6. 
Mar 7 00:53:49.055484 containerd[1479]: time="2026-03-07T00:53:49.055269585Z" level=info msg="StartContainer for \"d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6\" returns successfully" Mar 7 00:53:49.951563 kubelet[2662]: I0307 00:53:49.950496 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-rhdtm" podStartSLOduration=3.95047487 podStartE2EDuration="3.95047487s" podCreationTimestamp="2026-03-07 00:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:46.939433754 +0000 UTC m=+7.210174648" watchObservedRunningTime="2026-03-07 00:53:49.95047487 +0000 UTC m=+10.221215764" Mar 7 00:53:51.663505 kubelet[2662]: I0307 00:53:51.663358 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-hmg2l" podStartSLOduration=3.4648252299999998 podStartE2EDuration="5.66332981s" podCreationTimestamp="2026-03-07 00:53:46 +0000 UTC" firstStartedPulling="2026-03-07 00:53:46.765984809 +0000 UTC m=+7.036725743" lastFinishedPulling="2026-03-07 00:53:48.964489429 +0000 UTC m=+9.235230323" observedRunningTime="2026-03-07 00:53:49.951161182 +0000 UTC m=+10.221902116" watchObservedRunningTime="2026-03-07 00:53:51.66332981 +0000 UTC m=+11.934070704" Mar 7 00:53:55.113315 sudo[1737]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:55.206897 sshd[1718]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:55.212291 systemd[1]: sshd@7-88.99.14.23:22-20.161.92.111:59006.service: Deactivated successfully. Mar 7 00:53:55.214757 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 00:53:55.214988 systemd[1]: session-7.scope: Consumed 4.539s CPU time, 153.4M memory peak, 0B memory swap peak. Mar 7 00:53:55.218932 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit. Mar 7 00:53:55.220653 systemd-logind[1459]: Removed session 7. Mar 7 00:54:03.236224 systemd[1]: Created slice kubepods-besteffort-pod69096fed_c3e5_4bfd_a29a_3f44e622618a.slice - libcontainer container kubepods-besteffort-pod69096fed_c3e5_4bfd_a29a_3f44e622618a.slice. 
Mar 7 00:54:03.304931 kubelet[2662]: I0307 00:54:03.304861 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdpqf\" (UniqueName: \"kubernetes.io/projected/69096fed-c3e5-4bfd-a29a-3f44e622618a-kube-api-access-mdpqf\") pod \"calico-typha-555cbf4f6-59f5k\" (UID: \"69096fed-c3e5-4bfd-a29a-3f44e622618a\") " pod="calico-system/calico-typha-555cbf4f6-59f5k" Mar 7 00:54:03.305780 kubelet[2662]: I0307 00:54:03.305600 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/69096fed-c3e5-4bfd-a29a-3f44e622618a-typha-certs\") pod \"calico-typha-555cbf4f6-59f5k\" (UID: \"69096fed-c3e5-4bfd-a29a-3f44e622618a\") " pod="calico-system/calico-typha-555cbf4f6-59f5k" Mar 7 00:54:03.305780 kubelet[2662]: I0307 00:54:03.305653 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69096fed-c3e5-4bfd-a29a-3f44e622618a-tigera-ca-bundle\") pod \"calico-typha-555cbf4f6-59f5k\" (UID: \"69096fed-c3e5-4bfd-a29a-3f44e622618a\") " pod="calico-system/calico-typha-555cbf4f6-59f5k" Mar 7 00:54:03.340004 systemd[1]: Created slice kubepods-besteffort-pod54388dc5_047a_4ede_99f9_aca5b2bb2fe1.slice - libcontainer container kubepods-besteffort-pod54388dc5_047a_4ede_99f9_aca5b2bb2fe1.slice. Mar 7 00:54:03.411521 kubelet[2662]: I0307 00:54:03.407007 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-lib-modules\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411521 kubelet[2662]: I0307 00:54:03.407079 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-xtables-lock\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411521 kubelet[2662]: I0307 00:54:03.407119 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-bpffs\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411521 kubelet[2662]: I0307 00:54:03.407159 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-policysync\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411521 kubelet[2662]: I0307 00:54:03.407215 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8hnd\" (UniqueName: \"kubernetes.io/projected/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-kube-api-access-w8hnd\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411837 kubelet[2662]: I0307 00:54:03.407326 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-cni-bin-dir\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411837 kubelet[2662]: I0307 00:54:03.407367 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-cni-log-dir\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411837 kubelet[2662]: I0307 00:54:03.407404 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-var-lib-calico\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411837 kubelet[2662]: I0307 00:54:03.407442 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-var-run-calico\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411837 kubelet[2662]: I0307 00:54:03.407478 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-cni-net-dir\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411968 kubelet[2662]: I0307 00:54:03.407514 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-flexvol-driver-host\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411968 kubelet[2662]: I0307 00:54:03.407552 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-nodeproc\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411968 kubelet[2662]: I0307 00:54:03.407583 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-node-certs\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411968 kubelet[2662]: I0307 00:54:03.407601 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-tigera-ca-bundle\") pod \"calico-node-7zlh2\" (UID: \"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.411968 kubelet[2662]: I0307 00:54:03.407640 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/54388dc5-047a-4ede-99f9-aca5b2bb2fe1-sys-fs\") pod \"calico-node-7zlh2\" (UID: 
\"54388dc5-047a-4ede-99f9-aca5b2bb2fe1\") " pod="calico-system/calico-node-7zlh2" Mar 7 00:54:03.457711 kubelet[2662]: E0307 00:54:03.457497 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:03.508605 kubelet[2662]: I0307 00:54:03.508415 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3-socket-dir\") pod \"csi-node-driver-76chp\" (UID: \"56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3\") " pod="calico-system/csi-node-driver-76chp" Mar 7 00:54:03.509515 kubelet[2662]: I0307 00:54:03.509479 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3-varrun\") pod \"csi-node-driver-76chp\" (UID: \"56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3\") " pod="calico-system/csi-node-driver-76chp" Mar 7 00:54:03.509704 kubelet[2662]: I0307 00:54:03.509647 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx772\" (UniqueName: \"kubernetes.io/projected/56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3-kube-api-access-wx772\") pod \"csi-node-driver-76chp\" (UID: \"56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3\") " pod="calico-system/csi-node-driver-76chp" Mar 7 00:54:03.510118 kubelet[2662]: I0307 00:54:03.510061 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3-registration-dir\") pod \"csi-node-driver-76chp\" (UID: \"56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3\") " pod="calico-system/csi-node-driver-76chp" Mar 7 00:54:03.510790 kubelet[2662]: I0307 00:54:03.510705 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3-kubelet-dir\") pod \"csi-node-driver-76chp\" (UID: \"56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3\") " pod="calico-system/csi-node-driver-76chp" Mar 7 00:54:03.520194 kubelet[2662]: E0307 00:54:03.520167 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.520366 kubelet[2662]: W0307 00:54:03.520348 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.520440 kubelet[2662]: E0307 00:54:03.520428 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:03.533839 kubelet[2662]: E0307 00:54:03.533806 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.533839 kubelet[2662]: W0307 00:54:03.533830 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.534015 kubelet[2662]: E0307 00:54:03.533852 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.545579 containerd[1479]: time="2026-03-07T00:54:03.545497855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555cbf4f6-59f5k,Uid:69096fed-c3e5-4bfd-a29a-3f44e622618a,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:03.583704 containerd[1479]: time="2026-03-07T00:54:03.582976598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:03.583704 containerd[1479]: time="2026-03-07T00:54:03.583413046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:03.583704 containerd[1479]: time="2026-03-07T00:54:03.583477887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:03.584185 containerd[1479]: time="2026-03-07T00:54:03.584013297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:03.612311 systemd[1]: Started cri-containerd-f6ccfbeafdc41fc266b21aefa15b81e5777ef63be604cc4fc55b020d8444f4c0.scope - libcontainer container f6ccfbeafdc41fc266b21aefa15b81e5777ef63be604cc4fc55b020d8444f4c0. Mar 7 00:54:03.615724 kubelet[2662]: E0307 00:54:03.615527 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.615724 kubelet[2662]: W0307 00:54:03.615547 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.615724 kubelet[2662]: E0307 00:54:03.615567 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.616049 kubelet[2662]: E0307 00:54:03.616001 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.616049 kubelet[2662]: W0307 00:54:03.616013 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.616049 kubelet[2662]: E0307 00:54:03.616024 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:03.616381 kubelet[2662]: E0307 00:54:03.616360 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.616381 kubelet[2662]: W0307 00:54:03.616375 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.616581 kubelet[2662]: E0307 00:54:03.616560 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.617965 kubelet[2662]: E0307 00:54:03.617939 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.617965 kubelet[2662]: W0307 00:54:03.617959 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.618070 kubelet[2662]: E0307 00:54:03.617972 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.618298 kubelet[2662]: E0307 00:54:03.618230 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.618298 kubelet[2662]: W0307 00:54:03.618292 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.618398 kubelet[2662]: E0307 00:54:03.618304 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.618559 kubelet[2662]: E0307 00:54:03.618543 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.618559 kubelet[2662]: W0307 00:54:03.618558 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.618626 kubelet[2662]: E0307 00:54:03.618569 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.618782 kubelet[2662]: E0307 00:54:03.618765 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.618782 kubelet[2662]: W0307 00:54:03.618777 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.618857 kubelet[2662]: E0307 00:54:03.618787 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:03.619360 kubelet[2662]: E0307 00:54:03.619333 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.619360 kubelet[2662]: W0307 00:54:03.619350 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.619360 kubelet[2662]: E0307 00:54:03.619362 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.619919 kubelet[2662]: E0307 00:54:03.619898 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.619919 kubelet[2662]: W0307 00:54:03.619915 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.620017 kubelet[2662]: E0307 00:54:03.619927 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.620462 kubelet[2662]: E0307 00:54:03.620439 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.620462 kubelet[2662]: W0307 00:54:03.620457 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.620548 kubelet[2662]: E0307 00:54:03.620471 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.621390 kubelet[2662]: E0307 00:54:03.621363 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.621390 kubelet[2662]: W0307 00:54:03.621380 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.621390 kubelet[2662]: E0307 00:54:03.621391 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.621819 kubelet[2662]: E0307 00:54:03.621796 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.621819 kubelet[2662]: W0307 00:54:03.621814 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.621919 kubelet[2662]: E0307 00:54:03.621826 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:03.622626 kubelet[2662]: E0307 00:54:03.622605 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.622626 kubelet[2662]: W0307 00:54:03.622621 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.622708 kubelet[2662]: E0307 00:54:03.622633 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.623427 kubelet[2662]: E0307 00:54:03.623335 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.623514 kubelet[2662]: W0307 00:54:03.623493 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.623554 kubelet[2662]: E0307 00:54:03.623514 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.624424 kubelet[2662]: E0307 00:54:03.624395 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.624782 kubelet[2662]: W0307 00:54:03.624756 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.624782 kubelet[2662]: E0307 00:54:03.624781 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.625447 kubelet[2662]: E0307 00:54:03.625424 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.625694 kubelet[2662]: W0307 00:54:03.625672 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.625742 kubelet[2662]: E0307 00:54:03.625696 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.627341 kubelet[2662]: E0307 00:54:03.627307 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.627341 kubelet[2662]: W0307 00:54:03.627327 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.627341 kubelet[2662]: E0307 00:54:03.627338 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:03.627587 kubelet[2662]: E0307 00:54:03.627553 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.627587 kubelet[2662]: W0307 00:54:03.627568 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.627587 kubelet[2662]: E0307 00:54:03.627578 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.627821 kubelet[2662]: E0307 00:54:03.627779 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.627821 kubelet[2662]: W0307 00:54:03.627802 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.627821 kubelet[2662]: E0307 00:54:03.627812 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.628085 kubelet[2662]: E0307 00:54:03.628070 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.628085 kubelet[2662]: W0307 00:54:03.628084 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.628159 kubelet[2662]: E0307 00:54:03.628095 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.628373 kubelet[2662]: E0307 00:54:03.628359 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.628373 kubelet[2662]: W0307 00:54:03.628371 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.628439 kubelet[2662]: E0307 00:54:03.628381 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.630142 kubelet[2662]: E0307 00:54:03.630112 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.630142 kubelet[2662]: W0307 00:54:03.630136 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.630231 kubelet[2662]: E0307 00:54:03.630148 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:03.630579 kubelet[2662]: E0307 00:54:03.630558 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.630579 kubelet[2662]: W0307 00:54:03.630574 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.630653 kubelet[2662]: E0307 00:54:03.630586 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.631972 kubelet[2662]: E0307 00:54:03.631951 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.631972 kubelet[2662]: W0307 00:54:03.631965 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.632058 kubelet[2662]: E0307 00:54:03.631977 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.632739 kubelet[2662]: E0307 00:54:03.632715 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.632739 kubelet[2662]: W0307 00:54:03.632730 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.632739 kubelet[2662]: E0307 00:54:03.632741 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.649936 containerd[1479]: time="2026-03-07T00:54:03.649536646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7zlh2,Uid:54388dc5-047a-4ede-99f9-aca5b2bb2fe1,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:03.652659 kubelet[2662]: E0307 00:54:03.652621 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:03.652659 kubelet[2662]: W0307 00:54:03.652642 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:03.652771 kubelet[2662]: E0307 00:54:03.652676 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:03.692906 containerd[1479]: time="2026-03-07T00:54:03.692041844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:03.696986 containerd[1479]: time="2026-03-07T00:54:03.694618732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:03.696986 containerd[1479]: time="2026-03-07T00:54:03.694647853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:03.696986 containerd[1479]: time="2026-03-07T00:54:03.694931498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:03.720079 systemd[1]: Started cri-containerd-42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7.scope - libcontainer container 42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7. Mar 7 00:54:03.722460 containerd[1479]: time="2026-03-07T00:54:03.722423054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555cbf4f6-59f5k,Uid:69096fed-c3e5-4bfd-a29a-3f44e622618a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6ccfbeafdc41fc266b21aefa15b81e5777ef63be604cc4fc55b020d8444f4c0\"" Mar 7 00:54:03.725621 containerd[1479]: time="2026-03-07T00:54:03.725493151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 00:54:03.819222 containerd[1479]: time="2026-03-07T00:54:03.817422195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7zlh2,Uid:54388dc5-047a-4ede-99f9-aca5b2bb2fe1,Namespace:calico-system,Attempt:0,} returns sandbox id \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\"" Mar 7 00:54:05.276931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255559947.mount: Deactivated successfully. Mar 7 00:54:05.864330 kubelet[2662]: E0307 00:54:05.863835 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:06.036757 containerd[1479]: time="2026-03-07T00:54:06.036679061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:06.038472 containerd[1479]: time="2026-03-07T00:54:06.038409968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 7 00:54:06.039616 containerd[1479]: time="2026-03-07T00:54:06.039543386Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:06.045499 containerd[1479]: time="2026-03-07T00:54:06.045439597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:06.046912 containerd[1479]: time="2026-03-07T00:54:06.046530294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.319353112s" Mar 7 00:54:06.046912 containerd[1479]: time="2026-03-07T00:54:06.046567054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 00:54:06.049079 containerd[1479]: time="2026-03-07T00:54:06.049051213Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 00:54:06.065198 containerd[1479]: time="2026-03-07T00:54:06.065063540Z" level=info msg="CreateContainer within sandbox \"f6ccfbeafdc41fc266b21aefa15b81e5777ef63be604cc4fc55b020d8444f4c0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 00:54:06.095917 containerd[1479]: time="2026-03-07T00:54:06.095623212Z" level=info msg="CreateContainer within sandbox \"f6ccfbeafdc41fc266b21aefa15b81e5777ef63be604cc4fc55b020d8444f4c0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe6c582e47c8bb752eea709834cfe138898d7aeec6d8b044d522d68ee2d8bb17\"" Mar 7 00:54:06.097849 containerd[1479]: time="2026-03-07T00:54:06.096566307Z" level=info msg="StartContainer for \"fe6c582e47c8bb752eea709834cfe138898d7aeec6d8b044d522d68ee2d8bb17\"" Mar 7 00:54:06.132134 systemd[1]: Started cri-containerd-fe6c582e47c8bb752eea709834cfe138898d7aeec6d8b044d522d68ee2d8bb17.scope - libcontainer container fe6c582e47c8bb752eea709834cfe138898d7aeec6d8b044d522d68ee2d8bb17. Mar 7 00:54:06.165580 containerd[1479]: time="2026-03-07T00:54:06.165535373Z" level=info msg="StartContainer for \"fe6c582e47c8bb752eea709834cfe138898d7aeec6d8b044d522d68ee2d8bb17\" returns successfully" Mar 7 00:54:06.998627 kubelet[2662]: I0307 00:54:06.997924 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-555cbf4f6-59f5k" podStartSLOduration=1.675292871 podStartE2EDuration="3.997911878s" podCreationTimestamp="2026-03-07 00:54:03 +0000 UTC" firstStartedPulling="2026-03-07 00:54:03.725195066 +0000 UTC m=+23.995935960" lastFinishedPulling="2026-03-07 00:54:06.047814073 +0000 UTC m=+26.318554967" observedRunningTime="2026-03-07 00:54:06.997745515 +0000 UTC m=+27.268486449" watchObservedRunningTime="2026-03-07 00:54:06.997911878 +0000 UTC m=+27.268652772" Mar 7 00:54:07.019233 kubelet[2662]: E0307 00:54:07.019196 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.019778 kubelet[2662]: W0307 00:54:07.019281 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.019778 kubelet[2662]: E0307 00:54:07.019709 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.020457 kubelet[2662]: E0307 00:54:07.020310 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.020457 kubelet[2662]: W0307 00:54:07.020331 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.020457 kubelet[2662]: E0307 00:54:07.020369 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:07.021696 kubelet[2662]: E0307 00:54:07.021544 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.021696 kubelet[2662]: W0307 00:54:07.021567 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.021696 kubelet[2662]: E0307 00:54:07.021586 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.022181 kubelet[2662]: E0307 00:54:07.022148 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.022378 kubelet[2662]: W0307 00:54:07.022265 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.022378 kubelet[2662]: E0307 00:54:07.022309 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.022692 kubelet[2662]: E0307 00:54:07.022645 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.022692 kubelet[2662]: W0307 00:54:07.022655 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.022692 kubelet[2662]: E0307 00:54:07.022665 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.023017 kubelet[2662]: E0307 00:54:07.022961 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.023017 kubelet[2662]: W0307 00:54:07.022971 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.023017 kubelet[2662]: E0307 00:54:07.022981 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.023248 kubelet[2662]: E0307 00:54:07.023238 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.023415 kubelet[2662]: W0307 00:54:07.023320 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.023415 kubelet[2662]: E0307 00:54:07.023334 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:07.023710 kubelet[2662]: E0307 00:54:07.023638 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.023710 kubelet[2662]: W0307 00:54:07.023654 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.023710 kubelet[2662]: E0307 00:54:07.023664 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.024102 kubelet[2662]: E0307 00:54:07.024005 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.024102 kubelet[2662]: W0307 00:54:07.024020 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.024102 kubelet[2662]: E0307 00:54:07.024030 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.024330 kubelet[2662]: E0307 00:54:07.024260 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.024330 kubelet[2662]: W0307 00:54:07.024280 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.024330 kubelet[2662]: E0307 00:54:07.024293 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.024830 kubelet[2662]: E0307 00:54:07.024723 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.024830 kubelet[2662]: W0307 00:54:07.024733 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.024830 kubelet[2662]: E0307 00:54:07.024743 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.025446 kubelet[2662]: E0307 00:54:07.025357 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.025446 kubelet[2662]: W0307 00:54:07.025368 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.025446 kubelet[2662]: E0307 00:54:07.025378 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:07.026018 kubelet[2662]: E0307 00:54:07.026005 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.026093 kubelet[2662]: W0307 00:54:07.026081 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.026152 kubelet[2662]: E0307 00:54:07.026142 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.026494 kubelet[2662]: E0307 00:54:07.026399 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.026494 kubelet[2662]: W0307 00:54:07.026409 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.026494 kubelet[2662]: E0307 00:54:07.026419 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.027104 kubelet[2662]: E0307 00:54:07.026964 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.027104 kubelet[2662]: W0307 00:54:07.026977 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.027104 kubelet[2662]: E0307 00:54:07.026987 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.046498 kubelet[2662]: E0307 00:54:07.046322 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.046498 kubelet[2662]: W0307 00:54:07.046354 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.046498 kubelet[2662]: E0307 00:54:07.046384 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.047549 kubelet[2662]: E0307 00:54:07.047234 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.047549 kubelet[2662]: W0307 00:54:07.047257 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.047549 kubelet[2662]: E0307 00:54:07.047296 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:07.047985 kubelet[2662]: E0307 00:54:07.047711 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.047985 kubelet[2662]: W0307 00:54:07.047740 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.047985 kubelet[2662]: E0307 00:54:07.047761 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.048906 kubelet[2662]: E0307 00:54:07.048569 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.048906 kubelet[2662]: W0307 00:54:07.048840 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.048906 kubelet[2662]: E0307 00:54:07.048851 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.049090 kubelet[2662]: E0307 00:54:07.049074 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.049090 kubelet[2662]: W0307 00:54:07.049087 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.049164 kubelet[2662]: E0307 00:54:07.049096 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.049306 kubelet[2662]: E0307 00:54:07.049294 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.049306 kubelet[2662]: W0307 00:54:07.049305 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.049377 kubelet[2662]: E0307 00:54:07.049314 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.049511 kubelet[2662]: E0307 00:54:07.049500 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.049546 kubelet[2662]: W0307 00:54:07.049512 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.049546 kubelet[2662]: E0307 00:54:07.049521 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:07.049694 kubelet[2662]: E0307 00:54:07.049685 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.049694 kubelet[2662]: W0307 00:54:07.049694 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.049770 kubelet[2662]: E0307 00:54:07.049702 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.049870 kubelet[2662]: E0307 00:54:07.049857 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.049870 kubelet[2662]: W0307 00:54:07.049869 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.049870 kubelet[2662]: E0307 00:54:07.049907 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.050125 kubelet[2662]: E0307 00:54:07.050114 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.050125 kubelet[2662]: W0307 00:54:07.050125 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.050196 kubelet[2662]: E0307 00:54:07.050141 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.050526 kubelet[2662]: E0307 00:54:07.050514 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.050581 kubelet[2662]: W0307 00:54:07.050525 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.050581 kubelet[2662]: E0307 00:54:07.050568 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.050940 kubelet[2662]: E0307 00:54:07.050927 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.050940 kubelet[2662]: W0307 00:54:07.050939 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.051031 kubelet[2662]: E0307 00:54:07.050949 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:07.051455 kubelet[2662]: E0307 00:54:07.051438 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.051455 kubelet[2662]: W0307 00:54:07.051453 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.051531 kubelet[2662]: E0307 00:54:07.051463 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.053374 kubelet[2662]: E0307 00:54:07.053329 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.053374 kubelet[2662]: W0307 00:54:07.053346 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.053374 kubelet[2662]: E0307 00:54:07.053359 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.053891 kubelet[2662]: E0307 00:54:07.053786 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.053891 kubelet[2662]: W0307 00:54:07.053799 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.053891 kubelet[2662]: E0307 00:54:07.053809 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.054345 kubelet[2662]: E0307 00:54:07.054170 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.054345 kubelet[2662]: W0307 00:54:07.054182 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.054345 kubelet[2662]: E0307 00:54:07.054194 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.054631 kubelet[2662]: E0307 00:54:07.054531 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.054631 kubelet[2662]: W0307 00:54:07.054554 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.054631 kubelet[2662]: E0307 00:54:07.054568 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:07.055063 kubelet[2662]: E0307 00:54:07.055050 2662 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:07.055063 kubelet[2662]: W0307 00:54:07.055079 2662 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:07.055063 kubelet[2662]: E0307 00:54:07.055093 2662 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:07.747872 containerd[1479]: time="2026-03-07T00:54:07.747796786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:07.749842 containerd[1479]: time="2026-03-07T00:54:07.749779655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 7 00:54:07.751147 containerd[1479]: time="2026-03-07T00:54:07.751058473Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:07.756909 containerd[1479]: time="2026-03-07T00:54:07.755828502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:07.756909 containerd[1479]: time="2026-03-07T00:54:07.756705795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.707379338s" Mar 7 00:54:07.756909 containerd[1479]: time="2026-03-07T00:54:07.756735716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 7 00:54:07.763531 containerd[1479]: time="2026-03-07T00:54:07.763473493Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 00:54:07.778467 containerd[1479]: time="2026-03-07T00:54:07.778422230Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd\"" Mar 7 00:54:07.779650 containerd[1479]: time="2026-03-07T00:54:07.779622007Z" level=info msg="StartContainer for \"c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd\"" Mar 7 00:54:07.815107 systemd[1]: Started cri-containerd-c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd.scope - libcontainer container c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd. 
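The repeated driver-call.go / plugins.go errors above come from kubelet's FlexVolume prober: the driver binary expected at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not on the node, so the "init" call captures no output at all and JSON decoding of the empty string fails. A minimal stdlib sketch (illustrative only, not kubelet source; the struct below only approximates a FlexVolume reply) reproduces the exact error text seen in the log:

// Illustrative sketch, not kubelet code: decoding an empty FlexVolume
// driver response yields the "unexpected end of JSON input" error logged
// above, because the missing uds binary produces no output to parse.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates the JSON a FlexVolume driver is expected to
// print for "init" (field names here are illustrative).
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st) // empty output from the absent driver
	fmt.Println(err)                       // unexpected end of JSON input
}

Once Calico's flexvol-driver container (started just above) installs the uds binary into that directory, the probe begins returning valid JSON and these messages stop.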
Mar 7 00:54:07.848719 containerd[1479]: time="2026-03-07T00:54:07.848630407Z" level=info msg="StartContainer for \"c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd\" returns successfully" Mar 7 00:54:07.864574 kubelet[2662]: E0307 00:54:07.863561 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:07.869438 systemd[1]: cri-containerd-c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd.scope: Deactivated successfully. Mar 7 00:54:07.900713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd-rootfs.mount: Deactivated successfully. Mar 7 00:54:07.990977 kubelet[2662]: I0307 00:54:07.989374 2662 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:08.007506 containerd[1479]: time="2026-03-07T00:54:08.007315340Z" level=info msg="shim disconnected" id=c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd namespace=k8s.io Mar 7 00:54:08.007800 containerd[1479]: time="2026-03-07T00:54:08.007747706Z" level=warning msg="cleaning up after shim disconnected" id=c05a643b5c757fc67a29e1666d7573a84d3b955e5d4c6fdb9a2697fff0b2c5bd namespace=k8s.io Mar 7 00:54:08.007999 containerd[1479]: time="2026-03-07T00:54:08.007927029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:08.997972 containerd[1479]: time="2026-03-07T00:54:08.997658193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 00:54:09.864212 kubelet[2662]: E0307 00:54:09.864169 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:11.864866 kubelet[2662]: E0307 00:54:11.864251 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:13.864688 kubelet[2662]: E0307 00:54:13.864212 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:15.865323 kubelet[2662]: E0307 00:54:15.865263 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:17.254206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2128364227.mount: Deactivated successfully. 
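The pod_startup_latency_tracker entry for calico-typha-555cbf4f6-59f5k logged above at 00:54:06 can be checked by hand: podStartSLOduration is the end-to-end startup duration with the image-pull window subtracted, and the monotonic m=+ offsets in that entry reproduce the logged value exactly. A quick arithmetic sketch, assuming the standard pod-startup SLI definition rather than anything specific to this cluster:

// Rough check of the kubelet pod_startup_latency_tracker numbers above:
// SLO duration = E2E startup duration minus the image-pull interval
// (lastFinishedPulling - firstStartedPulling), using the monotonic offsets.
package main

import "fmt"

func main() {
	const (
		e2eSeconds = 3.997911878  // podStartE2EDuration from the log
		pullStart  = 23.995935960 // firstStartedPulling, monotonic offset m=+...
		pullFinish = 26.318554967 // lastFinishedPulling, monotonic offset m=+...
	)
	slo := e2eSeconds - (pullFinish - pullStart)
	fmt.Printf("podStartSLOduration ≈ %.9f s\n", slo) // ≈ 1.675292871, matching the log
}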
Mar 7 00:54:17.283445 containerd[1479]: time="2026-03-07T00:54:17.283375716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:17.284720 containerd[1479]: time="2026-03-07T00:54:17.284683806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 7 00:54:17.285967 containerd[1479]: time="2026-03-07T00:54:17.285917375Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:17.288477 containerd[1479]: time="2026-03-07T00:54:17.288426794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:17.289229 containerd[1479]: time="2026-03-07T00:54:17.289026519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 8.291325525s" Mar 7 00:54:17.289229 containerd[1479]: time="2026-03-07T00:54:17.289059639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 7 00:54:17.295067 containerd[1479]: time="2026-03-07T00:54:17.294867443Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 00:54:17.310066 containerd[1479]: time="2026-03-07T00:54:17.309938838Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758\"" Mar 7 00:54:17.311477 containerd[1479]: time="2026-03-07T00:54:17.311445649Z" level=info msg="StartContainer for \"3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758\"" Mar 7 00:54:17.350088 systemd[1]: Started cri-containerd-3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758.scope - libcontainer container 3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758. Mar 7 00:54:17.378363 containerd[1479]: time="2026-03-07T00:54:17.378288117Z" level=info msg="StartContainer for \"3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758\" returns successfully" Mar 7 00:54:17.498008 systemd[1]: cri-containerd-3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758.scope: Deactivated successfully. 
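The PullImage entries for ghcr.io/flatcar/calico/node:v3.31.4 above are CRI-driven pulls that land in containerd's k8s.io namespace (the io.cri-containerd.image=managed label marks them as such). For reference, a comparable pull can be issued directly with the containerd Go client; this is a hedged sketch assuming the containerd 1.x client API (github.com/containerd/containerd) and the default socket path, not code taken from this system:

// Hypothetical helper (assumes the containerd 1.x Go client): pull the same
// calico/node image into the k8s.io namespace used by the CRI plugin and
// report its name and size, roughly mirroring the log entries above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.31.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}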
Mar 7 00:54:17.789796 containerd[1479]: time="2026-03-07T00:54:17.789642283Z" level=info msg="shim disconnected" id=3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758 namespace=k8s.io Mar 7 00:54:17.789796 containerd[1479]: time="2026-03-07T00:54:17.789730644Z" level=warning msg="cleaning up after shim disconnected" id=3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758 namespace=k8s.io Mar 7 00:54:17.789796 containerd[1479]: time="2026-03-07T00:54:17.789745044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:17.864434 kubelet[2662]: E0307 00:54:17.864056 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:18.019967 containerd[1479]: time="2026-03-07T00:54:18.019697863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 00:54:18.254426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f6fc6a74e2204b5366dc1e9454442ca89f31aba4d299faf4ab3f82be4075758-rootfs.mount: Deactivated successfully. Mar 7 00:54:19.866926 kubelet[2662]: E0307 00:54:19.865680 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:21.825132 containerd[1479]: time="2026-03-07T00:54:21.825074432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:21.826933 containerd[1479]: time="2026-03-07T00:54:21.826813962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 7 00:54:21.828039 containerd[1479]: time="2026-03-07T00:54:21.827986329Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:21.831438 containerd[1479]: time="2026-03-07T00:54:21.831407709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:21.832553 containerd[1479]: time="2026-03-07T00:54:21.832518115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.812779932s" Mar 7 00:54:21.832717 containerd[1479]: time="2026-03-07T00:54:21.832697556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 7 00:54:21.838284 containerd[1479]: time="2026-03-07T00:54:21.838244669Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 00:54:21.856811 
containerd[1479]: time="2026-03-07T00:54:21.856725377Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436\"" Mar 7 00:54:21.858717 containerd[1479]: time="2026-03-07T00:54:21.857535302Z" level=info msg="StartContainer for \"c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436\"" Mar 7 00:54:21.863982 kubelet[2662]: E0307 00:54:21.863938 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76chp" podUID="56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3" Mar 7 00:54:21.896240 systemd[1]: Started cri-containerd-c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436.scope - libcontainer container c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436. Mar 7 00:54:21.927798 containerd[1479]: time="2026-03-07T00:54:21.927325232Z" level=info msg="StartContainer for \"c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436\" returns successfully" Mar 7 00:54:22.485943 containerd[1479]: time="2026-03-07T00:54:22.485839973Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:54:22.491974 systemd[1]: cri-containerd-c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436.scope: Deactivated successfully. Mar 7 00:54:22.513546 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436-rootfs.mount: Deactivated successfully. Mar 7 00:54:22.542925 kubelet[2662]: I0307 00:54:22.542821 2662 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 7 00:54:22.606942 containerd[1479]: time="2026-03-07T00:54:22.605074789Z" level=info msg="shim disconnected" id=c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436 namespace=k8s.io Mar 7 00:54:22.606942 containerd[1479]: time="2026-03-07T00:54:22.605798833Z" level=warning msg="cleaning up after shim disconnected" id=c988737c595de4fc74b9492c6117513e142235d9f7c704925a5c4fa46d5bb436 namespace=k8s.io Mar 7 00:54:22.606942 containerd[1479]: time="2026-03-07T00:54:22.605823593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:22.637543 systemd[1]: Created slice kubepods-burstable-podd6b126d7_c5ec_4b1a_85ff_31aabc7601d3.slice - libcontainer container kubepods-burstable-podd6b126d7_c5ec_4b1a_85ff_31aabc7601d3.slice. Mar 7 00:54:22.653146 systemd[1]: Created slice kubepods-besteffort-podf0b2393b_a5c9_465b_b0f4_23d9615ad922.slice - libcontainer container kubepods-besteffort-podf0b2393b_a5c9_465b_b0f4_23d9615ad922.slice. Mar 7 00:54:22.662130 systemd[1]: Created slice kubepods-burstable-podb0d81311_37f7_4001_8962_252a531602c3.slice - libcontainer container kubepods-burstable-podb0d81311_37f7_4001_8962_252a531602c3.slice. 
Mar 7 00:54:22.668912 kubelet[2662]: I0307 00:54:22.668333 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj87n\" (UniqueName: \"kubernetes.io/projected/0d2b6f3a-719f-4e2a-8ee9-96f451414fe3-kube-api-access-kj87n\") pod \"calico-apiserver-75cc8bd959-pt8pw\" (UID: \"0d2b6f3a-719f-4e2a-8ee9-96f451414fe3\") " pod="calico-system/calico-apiserver-75cc8bd959-pt8pw" Mar 7 00:54:22.668912 kubelet[2662]: I0307 00:54:22.668390 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f6fc484b-21d3-4f5d-b25e-f88bbf89a117-goldmane-key-pair\") pod \"goldmane-9f7667bb8-whjl7\" (UID: \"f6fc484b-21d3-4f5d-b25e-f88bbf89a117\") " pod="calico-system/goldmane-9f7667bb8-whjl7" Mar 7 00:54:22.668912 kubelet[2662]: I0307 00:54:22.668454 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzjn\" (UniqueName: \"kubernetes.io/projected/f0b2393b-a5c9-465b-b0f4-23d9615ad922-kube-api-access-dhzjn\") pod \"calico-apiserver-75cc8bd959-lxvtj\" (UID: \"f0b2393b-a5c9-465b-b0f4-23d9615ad922\") " pod="calico-system/calico-apiserver-75cc8bd959-lxvtj" Mar 7 00:54:22.668912 kubelet[2662]: I0307 00:54:22.668472 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-backend-key-pair\") pod \"whisker-95c58bff6-2zzx6\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " pod="calico-system/whisker-95c58bff6-2zzx6" Mar 7 00:54:22.668912 kubelet[2662]: I0307 00:54:22.668487 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d2b6f3a-719f-4e2a-8ee9-96f451414fe3-calico-apiserver-certs\") pod \"calico-apiserver-75cc8bd959-pt8pw\" (UID: \"0d2b6f3a-719f-4e2a-8ee9-96f451414fe3\") " pod="calico-system/calico-apiserver-75cc8bd959-pt8pw" Mar 7 00:54:22.669187 kubelet[2662]: I0307 00:54:22.668507 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2x6\" (UniqueName: \"kubernetes.io/projected/f6fc484b-21d3-4f5d-b25e-f88bbf89a117-kube-api-access-nq2x6\") pod \"goldmane-9f7667bb8-whjl7\" (UID: \"f6fc484b-21d3-4f5d-b25e-f88bbf89a117\") " pod="calico-system/goldmane-9f7667bb8-whjl7" Mar 7 00:54:22.669187 kubelet[2662]: I0307 00:54:22.668533 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6b126d7-c5ec-4b1a-85ff-31aabc7601d3-config-volume\") pod \"coredns-7d764666f9-rvn46\" (UID: \"d6b126d7-c5ec-4b1a-85ff-31aabc7601d3\") " pod="kube-system/coredns-7d764666f9-rvn46" Mar 7 00:54:22.669187 kubelet[2662]: I0307 00:54:22.668571 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-ca-bundle\") pod \"whisker-95c58bff6-2zzx6\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " pod="calico-system/whisker-95c58bff6-2zzx6" Mar 7 00:54:22.669187 kubelet[2662]: I0307 00:54:22.668610 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f6fc484b-21d3-4f5d-b25e-f88bbf89a117-config\") pod \"goldmane-9f7667bb8-whjl7\" (UID: \"f6fc484b-21d3-4f5d-b25e-f88bbf89a117\") " pod="calico-system/goldmane-9f7667bb8-whjl7" Mar 7 00:54:22.669187 kubelet[2662]: I0307 00:54:22.668627 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsd5t\" (UniqueName: \"kubernetes.io/projected/b0d81311-37f7-4001-8962-252a531602c3-kube-api-access-bsd5t\") pod \"coredns-7d764666f9-vwsss\" (UID: \"b0d81311-37f7-4001-8962-252a531602c3\") " pod="kube-system/coredns-7d764666f9-vwsss" Mar 7 00:54:22.669292 kubelet[2662]: I0307 00:54:22.668793 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-nginx-config\") pod \"whisker-95c58bff6-2zzx6\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " pod="calico-system/whisker-95c58bff6-2zzx6" Mar 7 00:54:22.669292 kubelet[2662]: I0307 00:54:22.668824 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xll9x\" (UniqueName: \"kubernetes.io/projected/d6b126d7-c5ec-4b1a-85ff-31aabc7601d3-kube-api-access-xll9x\") pod \"coredns-7d764666f9-rvn46\" (UID: \"d6b126d7-c5ec-4b1a-85ff-31aabc7601d3\") " pod="kube-system/coredns-7d764666f9-rvn46" Mar 7 00:54:22.669292 kubelet[2662]: I0307 00:54:22.668859 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34b5bba8-5d4e-4611-99bc-c577c12e794e-tigera-ca-bundle\") pod \"calico-kube-controllers-6777b486ff-5jxw7\" (UID: \"34b5bba8-5d4e-4611-99bc-c577c12e794e\") " pod="calico-system/calico-kube-controllers-6777b486ff-5jxw7" Mar 7 00:54:22.671481 kubelet[2662]: I0307 00:54:22.670180 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6fc484b-21d3-4f5d-b25e-f88bbf89a117-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-whjl7\" (UID: \"f6fc484b-21d3-4f5d-b25e-f88bbf89a117\") " pod="calico-system/goldmane-9f7667bb8-whjl7" Mar 7 00:54:22.671481 kubelet[2662]: I0307 00:54:22.670281 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0b2393b-a5c9-465b-b0f4-23d9615ad922-calico-apiserver-certs\") pod \"calico-apiserver-75cc8bd959-lxvtj\" (UID: \"f0b2393b-a5c9-465b-b0f4-23d9615ad922\") " pod="calico-system/calico-apiserver-75cc8bd959-lxvtj" Mar 7 00:54:22.671481 kubelet[2662]: I0307 00:54:22.670354 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9d2\" (UniqueName: \"kubernetes.io/projected/e0013408-ba16-4c5e-b2ac-2433326ba730-kube-api-access-kz9d2\") pod \"whisker-95c58bff6-2zzx6\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " pod="calico-system/whisker-95c58bff6-2zzx6" Mar 7 00:54:22.671481 kubelet[2662]: I0307 00:54:22.670407 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5z4\" (UniqueName: \"kubernetes.io/projected/34b5bba8-5d4e-4611-99bc-c577c12e794e-kube-api-access-dt5z4\") pod \"calico-kube-controllers-6777b486ff-5jxw7\" (UID: \"34b5bba8-5d4e-4611-99bc-c577c12e794e\") " pod="calico-system/calico-kube-controllers-6777b486ff-5jxw7" 
Mar 7 00:54:22.671481 kubelet[2662]: I0307 00:54:22.670461 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0d81311-37f7-4001-8962-252a531602c3-config-volume\") pod \"coredns-7d764666f9-vwsss\" (UID: \"b0d81311-37f7-4001-8962-252a531602c3\") " pod="kube-system/coredns-7d764666f9-vwsss" Mar 7 00:54:22.674188 systemd[1]: Created slice kubepods-besteffort-pod34b5bba8_5d4e_4611_99bc_c577c12e794e.slice - libcontainer container kubepods-besteffort-pod34b5bba8_5d4e_4611_99bc_c577c12e794e.slice. Mar 7 00:54:22.685196 systemd[1]: Created slice kubepods-besteffort-pode0013408_ba16_4c5e_b2ac_2433326ba730.slice - libcontainer container kubepods-besteffort-pode0013408_ba16_4c5e_b2ac_2433326ba730.slice. Mar 7 00:54:22.694872 systemd[1]: Created slice kubepods-besteffort-podf6fc484b_21d3_4f5d_b25e_f88bbf89a117.slice - libcontainer container kubepods-besteffort-podf6fc484b_21d3_4f5d_b25e_f88bbf89a117.slice. Mar 7 00:54:22.705257 systemd[1]: Created slice kubepods-besteffort-pod0d2b6f3a_719f_4e2a_8ee9_96f451414fe3.slice - libcontainer container kubepods-besteffort-pod0d2b6f3a_719f_4e2a_8ee9_96f451414fe3.slice. Mar 7 00:54:22.951387 containerd[1479]: time="2026-03-07T00:54:22.950920692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rvn46,Uid:d6b126d7-c5ec-4b1a-85ff-31aabc7601d3,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:22.962539 containerd[1479]: time="2026-03-07T00:54:22.960564745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-lxvtj,Uid:f0b2393b-a5c9-465b-b0f4-23d9615ad922,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:22.976231 containerd[1479]: time="2026-03-07T00:54:22.976192831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vwsss,Uid:b0d81311-37f7-4001-8962-252a531602c3,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:22.982515 containerd[1479]: time="2026-03-07T00:54:22.982262425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777b486ff-5jxw7,Uid:34b5bba8-5d4e-4611-99bc-c577c12e794e,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:22.994037 containerd[1479]: time="2026-03-07T00:54:22.993996849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95c58bff6-2zzx6,Uid:e0013408-ba16-4c5e-b2ac-2433326ba730,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:23.008943 containerd[1479]: time="2026-03-07T00:54:23.008902288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-whjl7,Uid:f6fc484b-21d3-4f5d-b25e-f88bbf89a117,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:23.012906 containerd[1479]: time="2026-03-07T00:54:23.012679188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-pt8pw,Uid:0d2b6f3a-719f-4e2a-8ee9-96f451414fe3,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:23.064055 containerd[1479]: time="2026-03-07T00:54:23.064017973Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 00:54:23.158206 containerd[1479]: time="2026-03-07T00:54:23.156287689Z" level=info msg="CreateContainer within sandbox \"42761a3cca092dfef31fbff5cce57f23b18b1d5a5e8c026ce81a9d4c6354c0a7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b7205570600e78a64f0da3f594d641a27fb6923f26c4e3ce4d17046647121ff9\"" Mar 7 00:54:23.158206 
containerd[1479]: time="2026-03-07T00:54:23.157517055Z" level=info msg="StartContainer for \"b7205570600e78a64f0da3f594d641a27fb6923f26c4e3ce4d17046647121ff9\"" Mar 7 00:54:23.177815 containerd[1479]: time="2026-03-07T00:54:23.177768120Z" level=error msg="Failed to destroy network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.178368 containerd[1479]: time="2026-03-07T00:54:23.178335683Z" level=error msg="encountered an error cleaning up failed sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.180004 containerd[1479]: time="2026-03-07T00:54:23.179954251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rvn46,Uid:d6b126d7-c5ec-4b1a-85ff-31aabc7601d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.182193 kubelet[2662]: E0307 00:54:23.182149 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.182514 kubelet[2662]: E0307 00:54:23.182222 2662 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rvn46" Mar 7 00:54:23.182514 kubelet[2662]: E0307 00:54:23.182242 2662 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rvn46" Mar 7 00:54:23.182514 kubelet[2662]: E0307 00:54:23.182289 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-rvn46_kube-system(d6b126d7-c5ec-4b1a-85ff-31aabc7601d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-rvn46_kube-system(d6b126d7-c5ec-4b1a-85ff-31aabc7601d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-rvn46" podUID="d6b126d7-c5ec-4b1a-85ff-31aabc7601d3" Mar 7 00:54:23.194407 containerd[1479]: time="2026-03-07T00:54:23.194336365Z" level=error msg="Failed to destroy network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.196389 containerd[1479]: time="2026-03-07T00:54:23.196185655Z" level=error msg="encountered an error cleaning up failed sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.196389 containerd[1479]: time="2026-03-07T00:54:23.196256935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-lxvtj,Uid:f0b2393b-a5c9-465b-b0f4-23d9615ad922,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.196565 kubelet[2662]: E0307 00:54:23.196515 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.196612 kubelet[2662]: E0307 00:54:23.196567 2662 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75cc8bd959-lxvtj" Mar 7 00:54:23.196612 kubelet[2662]: E0307 00:54:23.196593 2662 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75cc8bd959-lxvtj" Mar 7 00:54:23.196676 kubelet[2662]: E0307 00:54:23.196636 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75cc8bd959-lxvtj_calico-system(f0b2393b-a5c9-465b-b0f4-23d9615ad922)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75cc8bd959-lxvtj_calico-system(f0b2393b-a5c9-465b-b0f4-23d9615ad922)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75cc8bd959-lxvtj" podUID="f0b2393b-a5c9-465b-b0f4-23d9615ad922" Mar 7 00:54:23.244349 containerd[1479]: time="2026-03-07T00:54:23.244116702Z" level=error msg="Failed to destroy network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.247085 systemd[1]: Started cri-containerd-b7205570600e78a64f0da3f594d641a27fb6923f26c4e3ce4d17046647121ff9.scope - libcontainer container b7205570600e78a64f0da3f594d641a27fb6923f26c4e3ce4d17046647121ff9. Mar 7 00:54:23.248725 containerd[1479]: time="2026-03-07T00:54:23.248682685Z" level=error msg="encountered an error cleaning up failed sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.248870 containerd[1479]: time="2026-03-07T00:54:23.248848246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777b486ff-5jxw7,Uid:34b5bba8-5d4e-4611-99bc-c577c12e794e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.250099 kubelet[2662]: E0307 00:54:23.249955 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.250184 kubelet[2662]: E0307 00:54:23.250152 2662 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6777b486ff-5jxw7" Mar 7 00:54:23.250184 kubelet[2662]: E0307 00:54:23.250176 2662 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6777b486ff-5jxw7" Mar 7 00:54:23.251506 kubelet[2662]: E0307 00:54:23.250488 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed 
to \"CreatePodSandbox\" for \"calico-kube-controllers-6777b486ff-5jxw7_calico-system(34b5bba8-5d4e-4611-99bc-c577c12e794e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6777b486ff-5jxw7_calico-system(34b5bba8-5d4e-4611-99bc-c577c12e794e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6777b486ff-5jxw7" podUID="34b5bba8-5d4e-4611-99bc-c577c12e794e" Mar 7 00:54:23.267607 containerd[1479]: time="2026-03-07T00:54:23.267549983Z" level=error msg="Failed to destroy network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.268663 containerd[1479]: time="2026-03-07T00:54:23.268621828Z" level=error msg="encountered an error cleaning up failed sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.268750 containerd[1479]: time="2026-03-07T00:54:23.268684629Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vwsss,Uid:b0d81311-37f7-4001-8962-252a531602c3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.270035 kubelet[2662]: E0307 00:54:23.269995 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.270128 kubelet[2662]: E0307 00:54:23.270051 2662 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-vwsss" Mar 7 00:54:23.270128 kubelet[2662]: E0307 00:54:23.270069 2662 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-vwsss" Mar 7 00:54:23.270128 
kubelet[2662]: E0307 00:54:23.270115 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-vwsss_kube-system(b0d81311-37f7-4001-8962-252a531602c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-vwsss_kube-system(b0d81311-37f7-4001-8962-252a531602c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-vwsss" podUID="b0d81311-37f7-4001-8962-252a531602c3" Mar 7 00:54:23.303323 containerd[1479]: time="2026-03-07T00:54:23.303253047Z" level=error msg="Failed to destroy network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.304054 containerd[1479]: time="2026-03-07T00:54:23.304020811Z" level=error msg="encountered an error cleaning up failed sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.305110 containerd[1479]: time="2026-03-07T00:54:23.305076856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-pt8pw,Uid:0d2b6f3a-719f-4e2a-8ee9-96f451414fe3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.305597 kubelet[2662]: E0307 00:54:23.305550 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.305669 kubelet[2662]: E0307 00:54:23.305620 2662 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-75cc8bd959-pt8pw" Mar 7 00:54:23.305669 kubelet[2662]: E0307 00:54:23.305641 2662 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-75cc8bd959-pt8pw" Mar 7 00:54:23.305735 kubelet[2662]: E0307 00:54:23.305686 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75cc8bd959-pt8pw_calico-system(0d2b6f3a-719f-4e2a-8ee9-96f451414fe3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75cc8bd959-pt8pw_calico-system(0d2b6f3a-719f-4e2a-8ee9-96f451414fe3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-75cc8bd959-pt8pw" podUID="0d2b6f3a-719f-4e2a-8ee9-96f451414fe3" Mar 7 00:54:23.314705 containerd[1479]: time="2026-03-07T00:54:23.314659826Z" level=info msg="StartContainer for \"b7205570600e78a64f0da3f594d641a27fb6923f26c4e3ce4d17046647121ff9\" returns successfully" Mar 7 00:54:23.320380 containerd[1479]: time="2026-03-07T00:54:23.320274575Z" level=error msg="Failed to destroy network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.320872 containerd[1479]: time="2026-03-07T00:54:23.320840058Z" level=error msg="encountered an error cleaning up failed sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.321104 containerd[1479]: time="2026-03-07T00:54:23.321003419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-95c58bff6-2zzx6,Uid:e0013408-ba16-4c5e-b2ac-2433326ba730,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.321249 kubelet[2662]: E0307 00:54:23.321202 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.321309 kubelet[2662]: E0307 00:54:23.321284 2662 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-95c58bff6-2zzx6" Mar 7 00:54:23.322048 kubelet[2662]: E0307 00:54:23.321919 2662 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-95c58bff6-2zzx6" Mar 7 00:54:23.322130 kubelet[2662]: E0307 00:54:23.322047 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-95c58bff6-2zzx6_calico-system(e0013408-ba16-4c5e-b2ac-2433326ba730)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-95c58bff6-2zzx6_calico-system(e0013408-ba16-4c5e-b2ac-2433326ba730)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-95c58bff6-2zzx6" podUID="e0013408-ba16-4c5e-b2ac-2433326ba730" Mar 7 00:54:23.326237 containerd[1479]: time="2026-03-07T00:54:23.325926524Z" level=error msg="Failed to destroy network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.326537 containerd[1479]: time="2026-03-07T00:54:23.326451327Z" level=error msg="encountered an error cleaning up failed sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.326537 containerd[1479]: time="2026-03-07T00:54:23.326520767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-whjl7,Uid:f6fc484b-21d3-4f5d-b25e-f88bbf89a117,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.326750 kubelet[2662]: E0307 00:54:23.326702 2662 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.326787 kubelet[2662]: E0307 00:54:23.326756 2662 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-whjl7" Mar 7 00:54:23.326787 kubelet[2662]: E0307 00:54:23.326775 2662 kuberuntime_manager.go:1558] "CreatePodSandbox 
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-whjl7" Mar 7 00:54:23.327043 kubelet[2662]: E0307 00:54:23.326821 2662 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-whjl7_calico-system(f6fc484b-21d3-4f5d-b25e-f88bbf89a117)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-whjl7_calico-system(f6fc484b-21d3-4f5d-b25e-f88bbf89a117)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-whjl7" podUID="f6fc484b-21d3-4f5d-b25e-f88bbf89a117" Mar 7 00:54:23.863652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc-shm.mount: Deactivated successfully. Mar 7 00:54:23.863753 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3-shm.mount: Deactivated successfully. Mar 7 00:54:23.873216 systemd[1]: Created slice kubepods-besteffort-pod56e3e3e6_d0e2_4ac5_ac85_ddeaf77d8df3.slice - libcontainer container kubepods-besteffort-pod56e3e3e6_d0e2_4ac5_ac85_ddeaf77d8df3.slice. Mar 7 00:54:23.880310 containerd[1479]: time="2026-03-07T00:54:23.880224544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76chp,Uid:56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:24.048642 kubelet[2662]: I0307 00:54:24.047569 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:24.051232 containerd[1479]: time="2026-03-07T00:54:24.051191530Z" level=info msg="StopPodSandbox for \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\"" Mar 7 00:54:24.051779 kubelet[2662]: I0307 00:54:24.051747 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:24.052075 containerd[1479]: time="2026-03-07T00:54:24.052051014Z" level=info msg="Ensure that sandbox 33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f in task-service has been cleanup successfully" Mar 7 00:54:24.053281 containerd[1479]: time="2026-03-07T00:54:24.052827898Z" level=info msg="StopPodSandbox for \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\"" Mar 7 00:54:24.053471 containerd[1479]: time="2026-03-07T00:54:24.053445541Z" level=info msg="Ensure that sandbox c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3 in task-service has been cleanup successfully" Mar 7 00:54:24.057469 kubelet[2662]: I0307 00:54:24.057429 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:24.059094 containerd[1479]: time="2026-03-07T00:54:24.059058608Z" level=info 
msg="StopPodSandbox for \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\"" Mar 7 00:54:24.059263 containerd[1479]: time="2026-03-07T00:54:24.059222369Z" level=info msg="Ensure that sandbox cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0 in task-service has been cleanup successfully" Mar 7 00:54:24.067993 kubelet[2662]: I0307 00:54:24.067929 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:24.077768 containerd[1479]: time="2026-03-07T00:54:24.077634378Z" level=info msg="StopPodSandbox for \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\"" Mar 7 00:54:24.081383 containerd[1479]: time="2026-03-07T00:54:24.078861984Z" level=info msg="Ensure that sandbox 92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3 in task-service has been cleanup successfully" Mar 7 00:54:24.088788 systemd-networkd[1369]: cali90eef1d5fe3: Link UP Mar 7 00:54:24.089865 systemd-networkd[1369]: cali90eef1d5fe3: Gained carrier Mar 7 00:54:24.119011 kubelet[2662]: I0307 00:54:24.118384 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:24.133579 kubelet[2662]: I0307 00:54:24.133493 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-7zlh2" podStartSLOduration=1.906366363 podStartE2EDuration="21.133479008s" podCreationTimestamp="2026-03-07 00:54:03 +0000 UTC" firstStartedPulling="2026-03-07 00:54:03.819727919 +0000 UTC m=+24.090468773" lastFinishedPulling="2026-03-07 00:54:23.046840524 +0000 UTC m=+43.317581418" observedRunningTime="2026-03-07 00:54:24.127528859 +0000 UTC m=+44.398269713" watchObservedRunningTime="2026-03-07 00:54:24.133479008 +0000 UTC m=+44.404219902" Mar 7 00:54:24.135552 containerd[1479]: time="2026-03-07T00:54:24.134125131Z" level=info msg="StopPodSandbox for \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\"" Mar 7 00:54:24.135552 containerd[1479]: time="2026-03-07T00:54:24.134294172Z" level=info msg="Ensure that sandbox b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6 in task-service has been cleanup successfully" Mar 7 00:54:24.138074 kubelet[2662]: I0307 00:54:24.137706 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:24.141448 containerd[1479]: time="2026-03-07T00:54:24.141411686Z" level=info msg="StopPodSandbox for \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\"" Mar 7 00:54:24.142947 containerd[1479]: time="2026-03-07T00:54:24.142584652Z" level=info msg="Ensure that sandbox acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5 in task-service has been cleanup successfully" Mar 7 00:54:24.153568 kubelet[2662]: I0307 00:54:24.153279 2662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:24.156462 containerd[1479]: time="2026-03-07T00:54:24.156424439Z" level=info msg="StopPodSandbox for \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\"" Mar 7 00:54:24.157418 containerd[1479]: time="2026-03-07T00:54:24.156740800Z" level=info msg="Ensure that sandbox 1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc in task-service has been cleanup 
successfully" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:23.918 [ERROR][3734] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:23.943 [INFO][3734] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0 csi-node-driver- calico-system 56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3 706 0 2026-03-07 00:54:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf csi-node-driver-76chp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali90eef1d5fe3 [] [] }} ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:23.943 [INFO][3734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:23.999 [INFO][3745] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" HandleID="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.010 [INFO][3745] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" HandleID="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c1b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"csi-node-driver-76chp", "timestamp":"2026-03-07 00:54:23.999392479 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000338000)} Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.010 [INFO][3745] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.011 [INFO][3745] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.011 [INFO][3745] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.014 [INFO][3745] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.020 [INFO][3745] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.025 [INFO][3745] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.029 [INFO][3745] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.032 [INFO][3745] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.032 [INFO][3745] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.034 [INFO][3745] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1 Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.039 [INFO][3745] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.049 [INFO][3745] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.193/26] block=192.168.42.192/26 handle="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.049 [INFO][3745] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.193/26] handle="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.049 [INFO][3745] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:24.162708 containerd[1479]: 2026-03-07 00:54:24.049 [INFO][3745] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.193/26] IPv6=[] ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" HandleID="k8s-pod-network.d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" Mar 7 00:54:24.163386 containerd[1479]: 2026-03-07 00:54:24.060 [INFO][3734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"csi-node-driver-76chp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90eef1d5fe3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:24.163386 containerd[1479]: 2026-03-07 00:54:24.061 [INFO][3734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.193/32] ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" Mar 7 00:54:24.163386 containerd[1479]: 2026-03-07 00:54:24.061 [INFO][3734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90eef1d5fe3 ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" Mar 7 00:54:24.163386 containerd[1479]: 2026-03-07 00:54:24.106 [INFO][3734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" Mar 7 00:54:24.163386 containerd[1479]: 2026-03-07 00:54:24.106 [INFO][3734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1", Pod:"csi-node-driver-76chp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali90eef1d5fe3", MAC:"76:ae:cb:d7:db:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:24.163386 containerd[1479]: 2026-03-07 00:54:24.143 [INFO][3734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1" Namespace="calico-system" Pod="csi-node-driver-76chp" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-csi--node--driver--76chp-eth0" Mar 7 00:54:24.235621 containerd[1479]: time="2026-03-07T00:54:24.235317140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:24.235621 containerd[1479]: time="2026-03-07T00:54:24.235385861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:24.235911 containerd[1479]: time="2026-03-07T00:54:24.235479021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:24.235911 containerd[1479]: time="2026-03-07T00:54:24.235578022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:24.301103 systemd[1]: Started cri-containerd-d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1.scope - libcontainer container d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1. 
Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.210 [INFO][3788] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.213 [INFO][3788] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" iface="eth0" netns="/var/run/netns/cni-5bd71f67-ac2b-2a6a-8338-d6a1748b896f" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.213 [INFO][3788] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" iface="eth0" netns="/var/run/netns/cni-5bd71f67-ac2b-2a6a-8338-d6a1748b896f" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.217 [INFO][3788] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" iface="eth0" netns="/var/run/netns/cni-5bd71f67-ac2b-2a6a-8338-d6a1748b896f" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.217 [INFO][3788] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.217 [INFO][3788] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.423 [INFO][3880] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.423 [INFO][3880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.423 [INFO][3880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.450 [WARNING][3880] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.450 [INFO][3880] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.459 [INFO][3880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.490469 containerd[1479]: 2026-03-07 00:54:24.464 [INFO][3788] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:24.493012 containerd[1479]: time="2026-03-07T00:54:24.492971747Z" level=info msg="TearDown network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\" successfully" Mar 7 00:54:24.493187 containerd[1479]: time="2026-03-07T00:54:24.493142267Z" level=info msg="StopPodSandbox for \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\" returns successfully" Mar 7 00:54:24.515837 containerd[1479]: time="2026-03-07T00:54:24.515705697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vwsss,Uid:b0d81311-37f7-4001-8962-252a531602c3,Namespace:kube-system,Attempt:1,}" Mar 7 00:54:24.529072 containerd[1479]: time="2026-03-07T00:54:24.529015401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76chp,Uid:56e3e3e6-d0e2-4ac5-ac85-ddeaf77d8df3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1\"" Mar 7 00:54:24.546674 containerd[1479]: time="2026-03-07T00:54:24.546550686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.329 [INFO][3781] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.330 [INFO][3781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" iface="eth0" netns="/var/run/netns/cni-3ed274eb-5d58-b20f-2dd3-3b81ff3712b1" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.330 [INFO][3781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" iface="eth0" netns="/var/run/netns/cni-3ed274eb-5d58-b20f-2dd3-3b81ff3712b1" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.338 [INFO][3781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" iface="eth0" netns="/var/run/netns/cni-3ed274eb-5d58-b20f-2dd3-3b81ff3712b1" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.338 [INFO][3781] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.338 [INFO][3781] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.485 [INFO][3924] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.486 [INFO][3924] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.486 [INFO][3924] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.508 [WARNING][3924] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.508 [INFO][3924] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.511 [INFO][3924] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.551734 containerd[1479]: 2026-03-07 00:54:24.546 [INFO][3781] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:24.554263 containerd[1479]: time="2026-03-07T00:54:24.554013282Z" level=info msg="TearDown network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\" successfully" Mar 7 00:54:24.554263 containerd[1479]: time="2026-03-07T00:54:24.554058402Z" level=info msg="StopPodSandbox for \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\" returns successfully" Mar 7 00:54:24.556938 containerd[1479]: time="2026-03-07T00:54:24.556896136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777b486ff-5jxw7,Uid:34b5bba8-5d4e-4611-99bc-c577c12e794e,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.330 [INFO][3780] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.330 [INFO][3780] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" iface="eth0" netns="/var/run/netns/cni-7be76f89-b7ea-b839-53f7-7f6b3d2ff2e8" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.331 [INFO][3780] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" iface="eth0" netns="/var/run/netns/cni-7be76f89-b7ea-b839-53f7-7f6b3d2ff2e8" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.336 [INFO][3780] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" iface="eth0" netns="/var/run/netns/cni-7be76f89-b7ea-b839-53f7-7f6b3d2ff2e8" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.336 [INFO][3780] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.336 [INFO][3780] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.563 [INFO][3922] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.564 [INFO][3922] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.564 [INFO][3922] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.593 [WARNING][3922] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.593 [INFO][3922] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.603 [INFO][3922] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.621465 containerd[1479]: 2026-03-07 00:54:24.612 [INFO][3780] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:24.623105 containerd[1479]: time="2026-03-07T00:54:24.622970055Z" level=info msg="TearDown network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\" successfully" Mar 7 00:54:24.623105 containerd[1479]: time="2026-03-07T00:54:24.623013096Z" level=info msg="StopPodSandbox for \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\" returns successfully" Mar 7 00:54:24.627610 containerd[1479]: time="2026-03-07T00:54:24.627457917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-whjl7,Uid:f6fc484b-21d3-4f5d-b25e-f88bbf89a117,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.402 [INFO][3815] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.402 [INFO][3815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" iface="eth0" netns="/var/run/netns/cni-997930de-04da-d1aa-78ea-1490c87b55d9" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.404 [INFO][3815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" iface="eth0" netns="/var/run/netns/cni-997930de-04da-d1aa-78ea-1490c87b55d9" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.404 [INFO][3815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" iface="eth0" netns="/var/run/netns/cni-997930de-04da-d1aa-78ea-1490c87b55d9" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.404 [INFO][3815] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.404 [INFO][3815] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.611 [INFO][3939] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.612 [INFO][3939] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.612 [INFO][3939] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.633 [WARNING][3939] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.633 [INFO][3939] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.637 [INFO][3939] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.646066 containerd[1479]: 2026-03-07 00:54:24.642 [INFO][3815] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:24.647043 containerd[1479]: time="2026-03-07T00:54:24.646973971Z" level=info msg="TearDown network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\" successfully" Mar 7 00:54:24.647130 containerd[1479]: time="2026-03-07T00:54:24.647115692Z" level=info msg="StopPodSandbox for \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\" returns successfully" Mar 7 00:54:24.649964 containerd[1479]: time="2026-03-07T00:54:24.649920466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rvn46,Uid:d6b126d7-c5ec-4b1a-85ff-31aabc7601d3,Namespace:kube-system,Attempt:1,}" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.429 [INFO][3858] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.429 [INFO][3858] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" iface="eth0" netns="/var/run/netns/cni-2555a0cb-c24d-cc16-647f-1f4beeba7384" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.429 [INFO][3858] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" iface="eth0" netns="/var/run/netns/cni-2555a0cb-c24d-cc16-647f-1f4beeba7384" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.429 [INFO][3858] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" iface="eth0" netns="/var/run/netns/cni-2555a0cb-c24d-cc16-647f-1f4beeba7384" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.429 [INFO][3858] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.429 [INFO][3858] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.652 [INFO][3945] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.652 [INFO][3945] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.652 [INFO][3945] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.681 [WARNING][3945] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.682 [INFO][3945] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.684 [INFO][3945] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.696989 containerd[1479]: 2026-03-07 00:54:24.689 [INFO][3858] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:24.706830 containerd[1479]: time="2026-03-07T00:54:24.706630900Z" level=info msg="TearDown network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\" successfully" Mar 7 00:54:24.707174 containerd[1479]: time="2026-03-07T00:54:24.707150023Z" level=info msg="StopPodSandbox for \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\" returns successfully" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.437 [INFO][3854] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.437 [INFO][3854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" iface="eth0" netns="/var/run/netns/cni-546a3c8d-e1a0-d645-86dc-c0fbfa80d5de" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.438 [INFO][3854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" iface="eth0" netns="/var/run/netns/cni-546a3c8d-e1a0-d645-86dc-c0fbfa80d5de" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.444 [INFO][3854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" iface="eth0" netns="/var/run/netns/cni-546a3c8d-e1a0-d645-86dc-c0fbfa80d5de" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.445 [INFO][3854] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.445 [INFO][3854] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.648 [INFO][3948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.654 [INFO][3948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.684 [INFO][3948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.709 [WARNING][3948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.709 [INFO][3948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.719 [INFO][3948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.753825 containerd[1479]: 2026-03-07 00:54:24.746 [INFO][3854] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.479 [INFO][3853] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.479 [INFO][3853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" iface="eth0" netns="/var/run/netns/cni-77f99b31-b5d4-0aa0-e451-d3dd73cb8024" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.481 [INFO][3853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" iface="eth0" netns="/var/run/netns/cni-77f99b31-b5d4-0aa0-e451-d3dd73cb8024" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.484 [INFO][3853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" iface="eth0" netns="/var/run/netns/cni-77f99b31-b5d4-0aa0-e451-d3dd73cb8024" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.484 [INFO][3853] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.484 [INFO][3853] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.693 [INFO][3957] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.694 [INFO][3957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.719 [INFO][3957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.749 [WARNING][3957] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.750 [INFO][3957] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.756 [INFO][3957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.771811 containerd[1479]: 2026-03-07 00:54:24.763 [INFO][3853] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:24.792131 kubelet[2662]: I0307 00:54:24.790762 2662 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-backend-key-pair\") pod \"e0013408-ba16-4c5e-b2ac-2433326ba730\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " Mar 7 00:54:24.792131 kubelet[2662]: I0307 00:54:24.790993 2662 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-ca-bundle\") pod \"e0013408-ba16-4c5e-b2ac-2433326ba730\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " Mar 7 00:54:24.792131 kubelet[2662]: I0307 00:54:24.791021 2662 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-nginx-config\" (UniqueName: \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-nginx-config\") pod \"e0013408-ba16-4c5e-b2ac-2433326ba730\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " Mar 7 00:54:24.792131 kubelet[2662]: I0307 00:54:24.791092 2662 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/e0013408-ba16-4c5e-b2ac-2433326ba730-kube-api-access-kz9d2\" (UniqueName: \"kubernetes.io/projected/e0013408-ba16-4c5e-b2ac-2433326ba730-kube-api-access-kz9d2\") pod \"e0013408-ba16-4c5e-b2ac-2433326ba730\" (UID: \"e0013408-ba16-4c5e-b2ac-2433326ba730\") " Mar 7 00:54:24.792131 kubelet[2662]: I0307 00:54:24.791866 2662 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-ca-bundle" pod "e0013408-ba16-4c5e-b2ac-2433326ba730" (UID: "e0013408-ba16-4c5e-b2ac-2433326ba730"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:54:24.794224 kubelet[2662]: I0307 00:54:24.792118 2662 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-ca-bundle\") on node \"ci-4081-3-6-n-d5610c1cbf\" DevicePath \"\"" Mar 7 00:54:24.794224 kubelet[2662]: I0307 00:54:24.792929 2662 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-nginx-config" pod "e0013408-ba16-4c5e-b2ac-2433326ba730" (UID: "e0013408-ba16-4c5e-b2ac-2433326ba730"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:54:24.836524 containerd[1479]: time="2026-03-07T00:54:24.836376128Z" level=info msg="TearDown network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\" successfully" Mar 7 00:54:24.836524 containerd[1479]: time="2026-03-07T00:54:24.836416888Z" level=info msg="StopPodSandbox for \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\" returns successfully" Mar 7 00:54:24.836691 kubelet[2662]: I0307 00:54:24.836399 2662 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-backend-key-pair" pod "e0013408-ba16-4c5e-b2ac-2433326ba730" (UID: "e0013408-ba16-4c5e-b2ac-2433326ba730"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 00:54:24.837702 containerd[1479]: time="2026-03-07T00:54:24.837073331Z" level=info msg="TearDown network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\" successfully" Mar 7 00:54:24.837702 containerd[1479]: time="2026-03-07T00:54:24.837097531Z" level=info msg="StopPodSandbox for \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\" returns successfully" Mar 7 00:54:24.838513 kubelet[2662]: I0307 00:54:24.838472 2662 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0013408-ba16-4c5e-b2ac-2433326ba730-kube-api-access-kz9d2" pod "e0013408-ba16-4c5e-b2ac-2433326ba730" (UID: "e0013408-ba16-4c5e-b2ac-2433326ba730"). InnerVolumeSpecName "kube-api-access-kz9d2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 00:54:24.841410 containerd[1479]: time="2026-03-07T00:54:24.840465027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-lxvtj,Uid:f0b2393b-a5c9-465b-b0f4-23d9615ad922,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:24.844902 containerd[1479]: time="2026-03-07T00:54:24.843833804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-pt8pw,Uid:0d2b6f3a-719f-4e2a-8ee9-96f451414fe3,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:24.869099 systemd[1]: run-netns-cni\x2d546a3c8d\x2de1a0\x2dd645\x2d86dc\x2dc0fbfa80d5de.mount: Deactivated successfully. Mar 7 00:54:24.869423 systemd[1]: run-netns-cni\x2d7be76f89\x2db7ea\x2db839\x2d53f7\x2d7f6b3d2ff2e8.mount: Deactivated successfully. Mar 7 00:54:24.869482 systemd[1]: run-netns-cni\x2d2555a0cb\x2dc24d\x2dcc16\x2d647f\x2d1f4beeba7384.mount: Deactivated successfully. Mar 7 00:54:24.869528 systemd[1]: run-netns-cni\x2d5bd71f67\x2dac2b\x2d2a6a\x2d8338\x2dd6a1748b896f.mount: Deactivated successfully. Mar 7 00:54:24.870030 systemd[1]: run-netns-cni\x2d3ed274eb\x2d5d58\x2db20f\x2d2dd3\x2d3b81ff3712b1.mount: Deactivated successfully. 
Mar 7 00:54:24.870104 systemd[1]: run-netns-cni\x2d77f99b31\x2db5d4\x2d0aa0\x2de451\x2dd3dd73cb8024.mount: Deactivated successfully. Mar 7 00:54:24.870157 systemd[1]: run-netns-cni\x2d997930de\x2d04da\x2dd1aa\x2d78ea\x2d1490c87b55d9.mount: Deactivated successfully. Mar 7 00:54:24.870204 systemd[1]: var-lib-kubelet-pods-e0013408\x2dba16\x2d4c5e\x2db2ac\x2d2433326ba730-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkz9d2.mount: Deactivated successfully. Mar 7 00:54:24.870257 systemd[1]: var-lib-kubelet-pods-e0013408\x2dba16\x2d4c5e\x2db2ac\x2d2433326ba730-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 00:54:24.902904 kubelet[2662]: I0307 00:54:24.898998 2662 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0013408-ba16-4c5e-b2ac-2433326ba730-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-d5610c1cbf\" DevicePath \"\"" Mar 7 00:54:24.902904 kubelet[2662]: I0307 00:54:24.899034 2662 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/e0013408-ba16-4c5e-b2ac-2433326ba730-nginx-config\") on node \"ci-4081-3-6-n-d5610c1cbf\" DevicePath \"\"" Mar 7 00:54:24.902904 kubelet[2662]: I0307 00:54:24.899043 2662 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kz9d2\" (UniqueName: \"kubernetes.io/projected/e0013408-ba16-4c5e-b2ac-2433326ba730-kube-api-access-kz9d2\") on node \"ci-4081-3-6-n-d5610c1cbf\" DevicePath \"\"" Mar 7 00:54:24.985021 systemd-networkd[1369]: cali3d94602808e: Link UP Mar 7 00:54:24.985662 systemd-networkd[1369]: cali3d94602808e: Gained carrier Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.653 [ERROR][3971] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.681 [INFO][3971] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0 coredns-7d764666f9- kube-system b0d81311-37f7-4001-8962-252a531602c3 889 0 2026-03-07 00:53:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf coredns-7d764666f9-vwsss eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3d94602808e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.681 [INFO][3971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.816 [INFO][4031] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" 
HandleID="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.845 [INFO][4031] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" HandleID="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f3c20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"coredns-7d764666f9-vwsss", "timestamp":"2026-03-07 00:54:24.815868508 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b5b80)} Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.845 [INFO][4031] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.845 [INFO][4031] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.845 [INFO][4031] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.850 [INFO][4031] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.863 [INFO][4031] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.903 [INFO][4031] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.923 [INFO][4031] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.930 [INFO][4031] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.930 [INFO][4031] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.934 [INFO][4031] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40 Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.943 [INFO][4031] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.960 [INFO][4031] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.194/26] block=192.168.42.192/26 handle="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 
containerd[1479]: 2026-03-07 00:54:24.960 [INFO][4031] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.194/26] handle="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.960 [INFO][4031] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:25.062535 containerd[1479]: 2026-03-07 00:54:24.960 [INFO][4031] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.194/26] IPv6=[] ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" HandleID="k8s-pod-network.cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:25.064080 containerd[1479]: 2026-03-07 00:54:24.971 [INFO][3971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b0d81311-37f7-4001-8962-252a531602c3", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"coredns-7d764666f9-vwsss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d94602808e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.064080 containerd[1479]: 2026-03-07 00:54:24.972 [INFO][3971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.194/32] ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" 
WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:25.064080 containerd[1479]: 2026-03-07 00:54:24.973 [INFO][3971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3d94602808e ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:25.064080 containerd[1479]: 2026-03-07 00:54:24.990 [INFO][3971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:25.064080 containerd[1479]: 2026-03-07 00:54:25.003 [INFO][3971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b0d81311-37f7-4001-8962-252a531602c3", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40", Pod:"coredns-7d764666f9-vwsss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d94602808e", MAC:"46:42:57:f9:b4:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.064273 containerd[1479]: 2026-03-07 00:54:25.054 [INFO][3971] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40" Namespace="kube-system" Pod="coredns-7d764666f9-vwsss" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:25.116357 systemd-networkd[1369]: cali715d8581b62: Link UP Mar 7 00:54:25.116568 systemd-networkd[1369]: cali715d8581b62: Gained carrier Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:24.827 [ERROR][4008] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:24.903 [INFO][4008] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0 coredns-7d764666f9- kube-system d6b126d7-c5ec-4b1a-85ff-31aabc7601d3 893 0 2026-03-07 00:53:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf coredns-7d764666f9-rvn46 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali715d8581b62 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:24.904 [INFO][4008] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.012 [INFO][4090] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" HandleID="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.041 [INFO][4090] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" HandleID="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003e2170), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"coredns-7d764666f9-rvn46", "timestamp":"2026-03-07 00:54:25.012970098 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003a4160)} Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.041 [INFO][4090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.041 [INFO][4090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.041 [INFO][4090] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.047 [INFO][4090] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.060 [INFO][4090] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.071 [INFO][4090] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.075 [INFO][4090] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.078 [INFO][4090] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.079 [INFO][4090] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.084 [INFO][4090] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9 Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.094 [INFO][4090] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.104 [INFO][4090] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.195/26] block=192.168.42.192/26 handle="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.104 [INFO][4090] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.195/26] handle="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.105 [INFO][4090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.171932 containerd[1479]: 2026-03-07 00:54:25.105 [INFO][4090] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.195/26] IPv6=[] ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" HandleID="k8s-pod-network.422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:25.174596 containerd[1479]: 2026-03-07 00:54:25.111 [INFO][4008] cni-plugin/k8s.go 418: Populated endpoint ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d6b126d7-c5ec-4b1a-85ff-31aabc7601d3", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"coredns-7d764666f9-rvn46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d8581b62", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.174596 containerd[1479]: 2026-03-07 00:54:25.111 [INFO][4008] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.195/32] ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:25.174596 containerd[1479]: 2026-03-07 00:54:25.111 [INFO][4008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali715d8581b62 ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" 
WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:25.174596 containerd[1479]: 2026-03-07 00:54:25.117 [INFO][4008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:25.174596 containerd[1479]: 2026-03-07 00:54:25.127 [INFO][4008] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d6b126d7-c5ec-4b1a-85ff-31aabc7601d3", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9", Pod:"coredns-7d764666f9-rvn46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d8581b62", MAC:"1e:d2:fc:d1:61:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.174816 containerd[1479]: 2026-03-07 00:54:25.149 [INFO][4008] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9" Namespace="kube-system" Pod="coredns-7d764666f9-rvn46" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:25.197177 systemd[1]: Removed slice kubepods-besteffort-pode0013408_ba16_4c5e_b2ac_2433326ba730.slice - libcontainer container 
kubepods-besteffort-pode0013408_ba16_4c5e_b2ac_2433326ba730.slice. Mar 7 00:54:25.198895 containerd[1479]: time="2026-03-07T00:54:25.195652567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.198895 containerd[1479]: time="2026-03-07T00:54:25.195724487Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.198895 containerd[1479]: time="2026-03-07T00:54:25.195739807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.198895 containerd[1479]: time="2026-03-07T00:54:25.195910928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.316380 systemd-networkd[1369]: cali7b05a14c8e6: Link UP Mar 7 00:54:25.320062 systemd-networkd[1369]: cali7b05a14c8e6: Gained carrier Mar 7 00:54:25.349899 containerd[1479]: time="2026-03-07T00:54:25.349582545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.349899 containerd[1479]: time="2026-03-07T00:54:25.349645905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.349899 containerd[1479]: time="2026-03-07T00:54:25.349687305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.349899 containerd[1479]: time="2026-03-07T00:54:25.349845146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:24.764 [ERROR][3994] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:24.808 [INFO][3994] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0 goldmane-9f7667bb8- calico-system f6fc484b-21d3-4f5d-b25e-f88bbf89a117 890 0 2026-03-07 00:54:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf goldmane-9f7667bb8-whjl7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7b05a14c8e6 [] [] }} ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:24.808 [INFO][3994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.040 [INFO][4066] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" HandleID="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.070 [INFO][4066] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" HandleID="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003630d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"goldmane-9f7667bb8-whjl7", "timestamp":"2026-03-07 00:54:25.039995581 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004d6840)} Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.070 [INFO][4066] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.106 [INFO][4066] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.107 [INFO][4066] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.156 [INFO][4066] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.174 [INFO][4066] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.185 [INFO][4066] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.192 [INFO][4066] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.199 [INFO][4066] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.200 [INFO][4066] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.206 [INFO][4066] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542 Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.246 [INFO][4066] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.273 [INFO][4066] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.196/26] block=192.168.42.192/26 handle="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.273 [INFO][4066] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.196/26] handle="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.273 [INFO][4066] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.395224 containerd[1479]: 2026-03-07 00:54:25.273 [INFO][4066] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.196/26] IPv6=[] ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" HandleID="k8s-pod-network.263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:25.397067 containerd[1479]: 2026-03-07 00:54:25.284 [INFO][3994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f6fc484b-21d3-4f5d-b25e-f88bbf89a117", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"goldmane-9f7667bb8-whjl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b05a14c8e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.397067 containerd[1479]: 2026-03-07 00:54:25.284 [INFO][3994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.196/32] ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:25.397067 containerd[1479]: 2026-03-07 00:54:25.284 [INFO][3994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b05a14c8e6 ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:25.397067 containerd[1479]: 2026-03-07 00:54:25.323 [INFO][3994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:25.397067 containerd[1479]: 2026-03-07 00:54:25.325 [INFO][3994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f6fc484b-21d3-4f5d-b25e-f88bbf89a117", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542", Pod:"goldmane-9f7667bb8-whjl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b05a14c8e6", MAC:"0e:ef:d1:c0:0c:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.397067 containerd[1479]: 2026-03-07 00:54:25.389 [INFO][3994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542" Namespace="calico-system" Pod="goldmane-9f7667bb8-whjl7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:25.407351 systemd[1]: Started cri-containerd-cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40.scope - libcontainer container cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40. Mar 7 00:54:25.419520 systemd[1]: Started cri-containerd-422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9.scope - libcontainer container 422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9. Mar 7 00:54:25.434961 systemd[1]: Created slice kubepods-besteffort-pod6455c007_d992_42f8_aef7_b7ff752f7a18.slice - libcontainer container kubepods-besteffort-pod6455c007_d992_42f8_aef7_b7ff752f7a18.slice. Mar 7 00:54:25.501969 containerd[1479]: time="2026-03-07T00:54:25.500803430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.501969 containerd[1479]: time="2026-03-07T00:54:25.500902071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.501969 containerd[1479]: time="2026-03-07T00:54:25.500918351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.502276 containerd[1479]: time="2026-03-07T00:54:25.501913755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.524439 kubelet[2662]: I0307 00:54:25.524176 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6455c007-d992-42f8-aef7-b7ff752f7a18-nginx-config\") pod \"whisker-75fb54fcd5-hktf8\" (UID: \"6455c007-d992-42f8-aef7-b7ff752f7a18\") " pod="calico-system/whisker-75fb54fcd5-hktf8" Mar 7 00:54:25.528091 kubelet[2662]: I0307 00:54:25.527434 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6455c007-d992-42f8-aef7-b7ff752f7a18-whisker-backend-key-pair\") pod \"whisker-75fb54fcd5-hktf8\" (UID: \"6455c007-d992-42f8-aef7-b7ff752f7a18\") " pod="calico-system/whisker-75fb54fcd5-hktf8" Mar 7 00:54:25.528815 kubelet[2662]: I0307 00:54:25.528655 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6455c007-d992-42f8-aef7-b7ff752f7a18-whisker-ca-bundle\") pod \"whisker-75fb54fcd5-hktf8\" (UID: \"6455c007-d992-42f8-aef7-b7ff752f7a18\") " pod="calico-system/whisker-75fb54fcd5-hktf8" Mar 7 00:54:25.528815 kubelet[2662]: I0307 00:54:25.528712 2662 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v642g\" (UniqueName: \"kubernetes.io/projected/6455c007-d992-42f8-aef7-b7ff752f7a18-kube-api-access-v642g\") pod \"whisker-75fb54fcd5-hktf8\" (UID: \"6455c007-d992-42f8-aef7-b7ff752f7a18\") " pod="calico-system/whisker-75fb54fcd5-hktf8" Mar 7 00:54:25.544234 systemd-networkd[1369]: calia49e43bdc11: Link UP Mar 7 00:54:25.546701 systemd-networkd[1369]: calia49e43bdc11: Gained carrier Mar 7 00:54:25.565611 systemd[1]: Started cri-containerd-263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542.scope - libcontainer container 263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542. 
Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:24.994 [ERROR][4083] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.039 [INFO][4083] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0 calico-apiserver-75cc8bd959- calico-system 0d2b6f3a-719f-4e2a-8ee9-96f451414fe3 895 0 2026-03-07 00:54:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75cc8bd959 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf calico-apiserver-75cc8bd959-pt8pw eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia49e43bdc11 [] [] }} ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.039 [INFO][4083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.122 [INFO][4140] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" HandleID="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.144 [INFO][4140] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" HandleID="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000381d30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"calico-apiserver-75cc8bd959-pt8pw", "timestamp":"2026-03-07 00:54:25.122425274 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001862c0)} Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.148 [INFO][4140] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.281 [INFO][4140] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.282 [INFO][4140] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.329 [INFO][4140] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.404 [INFO][4140] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.432 [INFO][4140] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.438 [INFO][4140] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.454 [INFO][4140] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.455 [INFO][4140] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.462 [INFO][4140] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.490 [INFO][4140] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.508 [INFO][4140] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.197/26] block=192.168.42.192/26 handle="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.508 [INFO][4140] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.197/26] handle="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.508 [INFO][4140] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.579757 containerd[1479]: 2026-03-07 00:54:25.509 [INFO][4140] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.197/26] IPv6=[] ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" HandleID="k8s-pod-network.fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:25.580390 containerd[1479]: 2026-03-07 00:54:25.521 [INFO][4083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"0d2b6f3a-719f-4e2a-8ee9-96f451414fe3", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"calico-apiserver-75cc8bd959-pt8pw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia49e43bdc11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.580390 containerd[1479]: 2026-03-07 00:54:25.523 [INFO][4083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.197/32] ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:25.580390 containerd[1479]: 2026-03-07 00:54:25.523 [INFO][4083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia49e43bdc11 ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:25.580390 containerd[1479]: 2026-03-07 00:54:25.546 [INFO][4083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:25.580390 containerd[1479]: 2026-03-07 00:54:25.550 [INFO][4083] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"0d2b6f3a-719f-4e2a-8ee9-96f451414fe3", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d", Pod:"calico-apiserver-75cc8bd959-pt8pw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia49e43bdc11", MAC:"52:6a:d0:79:b3:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.580390 containerd[1479]: 2026-03-07 00:54:25.571 [INFO][4083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-pt8pw" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:25.599361 systemd-networkd[1369]: calia17b01ae894: Link UP Mar 7 00:54:25.599593 systemd-networkd[1369]: calia17b01ae894: Gained carrier Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:24.768 [ERROR][3988] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:24.813 [INFO][3988] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0 calico-kube-controllers-6777b486ff- calico-system 34b5bba8-5d4e-4611-99bc-c577c12e794e 891 0 2026-03-07 00:54:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6777b486ff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf calico-kube-controllers-6777b486ff-5jxw7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia17b01ae894 [] [] }} 
ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:24.818 [INFO][3988] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.130 [INFO][4074] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" HandleID="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.175 [INFO][4074] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" HandleID="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f6950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"calico-kube-controllers-6777b486ff-5jxw7", "timestamp":"2026-03-07 00:54:25.130129829 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40005ca420)} Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.175 [INFO][4074] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.510 [INFO][4074] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.510 [INFO][4074] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.523 [INFO][4074] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.536 [INFO][4074] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.553 [INFO][4074] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.556 [INFO][4074] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.559 [INFO][4074] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.559 [INFO][4074] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.563 [INFO][4074] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466 Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.579 [INFO][4074] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.591 [INFO][4074] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.198/26] block=192.168.42.192/26 handle="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.591 [INFO][4074] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.198/26] handle="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.591 [INFO][4074] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.630609 containerd[1479]: 2026-03-07 00:54:25.591 [INFO][4074] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.198/26] IPv6=[] ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" HandleID="k8s-pod-network.e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:25.631496 containerd[1479]: 2026-03-07 00:54:25.594 [INFO][3988] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0", GenerateName:"calico-kube-controllers-6777b486ff-", Namespace:"calico-system", SelfLink:"", UID:"34b5bba8-5d4e-4611-99bc-c577c12e794e", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777b486ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"calico-kube-controllers-6777b486ff-5jxw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17b01ae894", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.631496 containerd[1479]: 2026-03-07 00:54:25.594 [INFO][3988] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.198/32] ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:25.631496 containerd[1479]: 2026-03-07 00:54:25.594 [INFO][3988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia17b01ae894 ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:25.631496 containerd[1479]: 2026-03-07 00:54:25.600 [INFO][3988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" 
WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:25.631496 containerd[1479]: 2026-03-07 00:54:25.601 [INFO][3988] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0", GenerateName:"calico-kube-controllers-6777b486ff-", Namespace:"calico-system", SelfLink:"", UID:"34b5bba8-5d4e-4611-99bc-c577c12e794e", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777b486ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466", Pod:"calico-kube-controllers-6777b486ff-5jxw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17b01ae894", MAC:"62:fe:d9:8e:06:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.631496 containerd[1479]: 2026-03-07 00:54:25.627 [INFO][3988] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466" Namespace="calico-system" Pod="calico-kube-controllers-6777b486ff-5jxw7" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:25.667516 containerd[1479]: time="2026-03-07T00:54:25.666134940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.667516 containerd[1479]: time="2026-03-07T00:54:25.666202180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.667516 containerd[1479]: time="2026-03-07T00:54:25.666226700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.667516 containerd[1479]: time="2026-03-07T00:54:25.666332021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.720965 systemd[1]: Started cri-containerd-fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d.scope - libcontainer container fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d. Mar 7 00:54:25.726630 containerd[1479]: time="2026-03-07T00:54:25.726202572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-vwsss,Uid:b0d81311-37f7-4001-8962-252a531602c3,Namespace:kube-system,Attempt:1,} returns sandbox id \"cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40\"" Mar 7 00:54:25.747173 containerd[1479]: time="2026-03-07T00:54:25.746120423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75fb54fcd5-hktf8,Uid:6455c007-d992-42f8-aef7-b7ff752f7a18,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:25.767905 containerd[1479]: time="2026-03-07T00:54:25.767342679Z" level=info msg="CreateContainer within sandbox \"cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:25.794539 containerd[1479]: time="2026-03-07T00:54:25.794161761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.794539 containerd[1479]: time="2026-03-07T00:54:25.794227481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.794539 containerd[1479]: time="2026-03-07T00:54:25.794247241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.794539 containerd[1479]: time="2026-03-07T00:54:25.794395682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.808982 containerd[1479]: time="2026-03-07T00:54:25.808931148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rvn46,Uid:d6b126d7-c5ec-4b1a-85ff-31aabc7601d3,Namespace:kube-system,Attempt:1,} returns sandbox id \"422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9\"" Mar 7 00:54:25.829720 systemd-networkd[1369]: cali92c16f77f46: Link UP Mar 7 00:54:25.835754 systemd-networkd[1369]: cali92c16f77f46: Gained carrier Mar 7 00:54:25.838281 containerd[1479]: time="2026-03-07T00:54:25.838153920Z" level=info msg="CreateContainer within sandbox \"422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:25.887772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1159067354.mount: Deactivated successfully. Mar 7 00:54:25.891891 containerd[1479]: time="2026-03-07T00:54:25.891593202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-whjl7,Uid:f6fc484b-21d3-4f5d-b25e-f88bbf89a117,Namespace:calico-system,Attempt:1,} returns sandbox id \"263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542\"" Mar 7 00:54:25.900196 kubelet[2662]: I0307 00:54:25.899364 2662 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e0013408-ba16-4c5e-b2ac-2433326ba730" path="/var/lib/kubelet/pods/e0013408-ba16-4c5e-b2ac-2433326ba730/volumes" Mar 7 00:54:25.908174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446607737.mount: Deactivated successfully. 
Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.193 [ERROR][4092] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.277 [INFO][4092] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0 calico-apiserver-75cc8bd959- calico-system f0b2393b-a5c9-465b-b0f4-23d9615ad922 896 0 2026-03-07 00:54:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75cc8bd959 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf calico-apiserver-75cc8bd959-lxvtj eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali92c16f77f46 [] [] }} ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.277 [INFO][4092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.474 [INFO][4255] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" HandleID="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.495 [INFO][4255] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" HandleID="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031dee0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"calico-apiserver-75cc8bd959-lxvtj", "timestamp":"2026-03-07 00:54:25.474702552 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003ca420)} Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.495 [INFO][4255] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.592 [INFO][4255] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.592 [INFO][4255] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.619 [INFO][4255] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.641 [INFO][4255] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.663 [INFO][4255] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.670 [INFO][4255] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.710 [INFO][4255] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.710 [INFO][4255] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.734 [INFO][4255] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.762 [INFO][4255] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.796 [INFO][4255] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.199/26] block=192.168.42.192/26 handle="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.796 [INFO][4255] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.199/26] handle="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.796 [INFO][4255] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.914731 containerd[1479]: 2026-03-07 00:54:25.796 [INFO][4255] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.199/26] IPv6=[] ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" HandleID="k8s-pod-network.e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:25.915457 containerd[1479]: 2026-03-07 00:54:25.814 [INFO][4092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"f0b2393b-a5c9-465b-b0f4-23d9615ad922", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"calico-apiserver-75cc8bd959-lxvtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali92c16f77f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.915457 containerd[1479]: 2026-03-07 00:54:25.816 [INFO][4092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.199/32] ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:25.915457 containerd[1479]: 2026-03-07 00:54:25.816 [INFO][4092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92c16f77f46 ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:25.915457 containerd[1479]: 2026-03-07 00:54:25.840 [INFO][4092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:25.915457 containerd[1479]: 2026-03-07 00:54:25.847 [INFO][4092] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"f0b2393b-a5c9-465b-b0f4-23d9615ad922", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc", Pod:"calico-apiserver-75cc8bd959-lxvtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali92c16f77f46", MAC:"ea:6d:bc:ba:25:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.915457 containerd[1479]: 2026-03-07 00:54:25.878 [INFO][4092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc" Namespace="calico-system" Pod="calico-apiserver-75cc8bd959-lxvtj" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:25.923125 systemd-networkd[1369]: cali90eef1d5fe3: Gained IPv6LL Mar 7 00:54:25.949698 containerd[1479]: time="2026-03-07T00:54:25.947665417Z" level=info msg="CreateContainer within sandbox \"cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6c0259338e3fcb3a9ca516a9986f068053b0299a45bffbc2148c44857c3c05b\"" Mar 7 00:54:25.950089 containerd[1479]: time="2026-03-07T00:54:25.949386904Z" level=info msg="CreateContainer within sandbox \"422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4177c4f0371c2b52d5d4c31338ac620bc3b068bec9f3d4bcfc5a7720156d31aa\"" Mar 7 00:54:25.951438 containerd[1479]: time="2026-03-07T00:54:25.951355233Z" level=info msg="StartContainer for \"e6c0259338e3fcb3a9ca516a9986f068053b0299a45bffbc2148c44857c3c05b\"" Mar 7 00:54:25.958541 containerd[1479]: time="2026-03-07T00:54:25.957917823Z" level=info msg="StartContainer for \"4177c4f0371c2b52d5d4c31338ac620bc3b068bec9f3d4bcfc5a7720156d31aa\"" Mar 7 00:54:25.961163 systemd[1]: Started cri-containerd-e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466.scope - libcontainer 
container e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466. Mar 7 00:54:25.972718 containerd[1479]: time="2026-03-07T00:54:25.972434849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.972718 containerd[1479]: time="2026-03-07T00:54:25.972500249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.972718 containerd[1479]: time="2026-03-07T00:54:25.972515089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.972718 containerd[1479]: time="2026-03-07T00:54:25.972600010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:26.019118 systemd[1]: Started cri-containerd-e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc.scope - libcontainer container e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc. Mar 7 00:54:26.060565 systemd[1]: Started cri-containerd-e6c0259338e3fcb3a9ca516a9986f068053b0299a45bffbc2148c44857c3c05b.scope - libcontainer container e6c0259338e3fcb3a9ca516a9986f068053b0299a45bffbc2148c44857c3c05b. Mar 7 00:54:26.061332 containerd[1479]: time="2026-03-07T00:54:26.060771793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-pt8pw,Uid:0d2b6f3a-719f-4e2a-8ee9-96f451414fe3,Namespace:calico-system,Attempt:1,} returns sandbox id \"fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d\"" Mar 7 00:54:26.070346 systemd[1]: Started cri-containerd-4177c4f0371c2b52d5d4c31338ac620bc3b068bec9f3d4bcfc5a7720156d31aa.scope - libcontainer container 4177c4f0371c2b52d5d4c31338ac620bc3b068bec9f3d4bcfc5a7720156d31aa. 
Mar 7 00:54:26.143549 containerd[1479]: time="2026-03-07T00:54:26.143383264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75cc8bd959-lxvtj,Uid:f0b2393b-a5c9-465b-b0f4-23d9615ad922,Namespace:calico-system,Attempt:1,} returns sandbox id \"e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc\"" Mar 7 00:54:26.159992 containerd[1479]: time="2026-03-07T00:54:26.159191891Z" level=info msg="StartContainer for \"4177c4f0371c2b52d5d4c31338ac620bc3b068bec9f3d4bcfc5a7720156d31aa\" returns successfully" Mar 7 00:54:26.160261 containerd[1479]: time="2026-03-07T00:54:26.159966974Z" level=info msg="StartContainer for \"e6c0259338e3fcb3a9ca516a9986f068053b0299a45bffbc2148c44857c3c05b\" returns successfully" Mar 7 00:54:26.236850 containerd[1479]: time="2026-03-07T00:54:26.236251739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6777b486ff-5jxw7,Uid:34b5bba8-5d4e-4611-99bc-c577c12e794e,Namespace:calico-system,Attempt:1,} returns sandbox id \"e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466\"" Mar 7 00:54:26.260298 kubelet[2662]: I0307 00:54:26.260227 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-vwsss" podStartSLOduration=40.26021384 podStartE2EDuration="40.26021384s" podCreationTimestamp="2026-03-07 00:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:26.259147676 +0000 UTC m=+46.529888610" watchObservedRunningTime="2026-03-07 00:54:26.26021384 +0000 UTC m=+46.530954694" Mar 7 00:54:26.265852 kubelet[2662]: I0307 00:54:26.265150 2662 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:26.307580 systemd-networkd[1369]: cali715d8581b62: Gained IPv6LL Mar 7 00:54:26.343134 kubelet[2662]: I0307 00:54:26.343069 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-rvn46" podStartSLOduration=40.342958072 podStartE2EDuration="40.342958072s" podCreationTimestamp="2026-03-07 00:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:26.314003229 +0000 UTC m=+46.584744123" watchObservedRunningTime="2026-03-07 00:54:26.342958072 +0000 UTC m=+46.613698966" Mar 7 00:54:26.432251 systemd-networkd[1369]: calif6f88ce5542: Link UP Mar 7 00:54:26.432459 systemd-networkd[1369]: calif6f88ce5542: Gained carrier Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.090 [ERROR][4424] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.123 [INFO][4424] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0 whisker-75fb54fcd5- calico-system 6455c007-d992-42f8-aef7-b7ff752f7a18 920 0 2026-03-07 00:54:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75fb54fcd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-d5610c1cbf whisker-75fb54fcd5-hktf8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif6f88ce5542 [] [] }} 
ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.126 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.246 [INFO][4569] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" HandleID="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.292 [INFO][4569] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" HandleID="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-d5610c1cbf", "pod":"whisker-75fb54fcd5-hktf8", "timestamp":"2026-03-07 00:54:26.24605362 +0000 UTC"}, Hostname:"ci-4081-3-6-n-d5610c1cbf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400018eb00)} Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.292 [INFO][4569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.292 [INFO][4569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.292 [INFO][4569] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-d5610c1cbf' Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.321 [INFO][4569] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.330 [INFO][4569] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.344 [INFO][4569] ipam/ipam.go 526: Trying affinity for 192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.350 [INFO][4569] ipam/ipam.go 160: Attempting to load block cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.358 [INFO][4569] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.358 [INFO][4569] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.364 [INFO][4569] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.385 [INFO][4569] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.424 [INFO][4569] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.42.200/26] block=192.168.42.192/26 handle="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.424 [INFO][4569] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.42.200/26] handle="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" host="ci-4081-3-6-n-d5610c1cbf" Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.424 [INFO][4569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:26.477073 containerd[1479]: 2026-03-07 00:54:26.424 [INFO][4569] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.42.200/26] IPv6=[] ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" HandleID="k8s-pod-network.d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" Mar 7 00:54:26.477719 containerd[1479]: 2026-03-07 00:54:26.426 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0", GenerateName:"whisker-75fb54fcd5-", Namespace:"calico-system", SelfLink:"", UID:"6455c007-d992-42f8-aef7-b7ff752f7a18", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75fb54fcd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"", Pod:"whisker-75fb54fcd5-hktf8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif6f88ce5542", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:26.477719 containerd[1479]: 2026-03-07 00:54:26.427 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.42.200/32] ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" Mar 7 00:54:26.477719 containerd[1479]: 2026-03-07 00:54:26.428 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6f88ce5542 ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" Mar 7 00:54:26.477719 containerd[1479]: 2026-03-07 00:54:26.429 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" Mar 7 00:54:26.477719 containerd[1479]: 2026-03-07 00:54:26.431 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" 
Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0", GenerateName:"whisker-75fb54fcd5-", Namespace:"calico-system", SelfLink:"", UID:"6455c007-d992-42f8-aef7-b7ff752f7a18", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75fb54fcd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b", Pod:"whisker-75fb54fcd5-hktf8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.42.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif6f88ce5542", MAC:"9e:1e:79:01:8c:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:26.477719 containerd[1479]: 2026-03-07 00:54:26.473 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b" Namespace="calico-system" Pod="whisker-75fb54fcd5-hktf8" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--75fb54fcd5--hktf8-eth0" Mar 7 00:54:26.498250 containerd[1479]: time="2026-03-07T00:54:26.498118732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:26.498250 containerd[1479]: time="2026-03-07T00:54:26.498193252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:26.498250 containerd[1479]: time="2026-03-07T00:54:26.498228612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:26.498976 containerd[1479]: time="2026-03-07T00:54:26.498335133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:26.522366 systemd[1]: Started cri-containerd-d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b.scope - libcontainer container d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b. 
Mar 7 00:54:26.605363 containerd[1479]: time="2026-03-07T00:54:26.605322308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75fb54fcd5-hktf8,Uid:6455c007-d992-42f8-aef7-b7ff752f7a18,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b\"" Mar 7 00:54:26.738957 kernel: calico-node[4163]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 00:54:26.755184 systemd-networkd[1369]: cali7b05a14c8e6: Gained IPv6LL Mar 7 00:54:26.757200 systemd-networkd[1369]: cali3d94602808e: Gained IPv6LL Mar 7 00:54:26.947053 systemd-networkd[1369]: cali92c16f77f46: Gained IPv6LL Mar 7 00:54:27.075982 systemd-networkd[1369]: calia17b01ae894: Gained IPv6LL Mar 7 00:54:27.239152 systemd-networkd[1369]: vxlan.calico: Link UP Mar 7 00:54:27.239159 systemd-networkd[1369]: vxlan.calico: Gained carrier Mar 7 00:54:27.395173 systemd-networkd[1369]: calia49e43bdc11: Gained IPv6LL Mar 7 00:54:27.844060 systemd-networkd[1369]: calif6f88ce5542: Gained IPv6LL Mar 7 00:54:28.046745 containerd[1479]: time="2026-03-07T00:54:28.046029343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.048926 containerd[1479]: time="2026-03-07T00:54:28.048193791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 00:54:28.049150 containerd[1479]: time="2026-03-07T00:54:28.049104515Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.052822 containerd[1479]: time="2026-03-07T00:54:28.052505368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.053367 containerd[1479]: time="2026-03-07T00:54:28.053334611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 3.506735205s" Mar 7 00:54:28.053427 containerd[1479]: time="2026-03-07T00:54:28.053369411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 00:54:28.055375 containerd[1479]: time="2026-03-07T00:54:28.055265738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 00:54:28.061650 containerd[1479]: time="2026-03-07T00:54:28.061526601Z" level=info msg="CreateContainer within sandbox \"d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 00:54:28.092454 containerd[1479]: time="2026-03-07T00:54:28.092328636Z" level=info msg="CreateContainer within sandbox \"d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3c9e65844514fb275d7ff6ad9b390e3443f7cafdd5c1e7f0834a43d5cda7abc1\"" Mar 7 00:54:28.094248 containerd[1479]: time="2026-03-07T00:54:28.094093083Z" level=info msg="StartContainer for 
\"3c9e65844514fb275d7ff6ad9b390e3443f7cafdd5c1e7f0834a43d5cda7abc1\"" Mar 7 00:54:28.130481 systemd[1]: Started cri-containerd-3c9e65844514fb275d7ff6ad9b390e3443f7cafdd5c1e7f0834a43d5cda7abc1.scope - libcontainer container 3c9e65844514fb275d7ff6ad9b390e3443f7cafdd5c1e7f0834a43d5cda7abc1. Mar 7 00:54:28.162044 containerd[1479]: time="2026-03-07T00:54:28.161924496Z" level=info msg="StartContainer for \"3c9e65844514fb275d7ff6ad9b390e3443f7cafdd5c1e7f0834a43d5cda7abc1\" returns successfully" Mar 7 00:54:28.548056 systemd-networkd[1369]: vxlan.calico: Gained IPv6LL Mar 7 00:54:30.484696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961919977.mount: Deactivated successfully. Mar 7 00:54:30.801113 containerd[1479]: time="2026-03-07T00:54:30.800943002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.803011 containerd[1479]: time="2026-03-07T00:54:30.802957409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 7 00:54:30.803900 containerd[1479]: time="2026-03-07T00:54:30.803806492Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.806657 containerd[1479]: time="2026-03-07T00:54:30.806605061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.808572 containerd[1479]: time="2026-03-07T00:54:30.807805785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.752429647s" Mar 7 00:54:30.808572 containerd[1479]: time="2026-03-07T00:54:30.807841745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 7 00:54:30.808915 containerd[1479]: time="2026-03-07T00:54:30.808841508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:54:30.815416 containerd[1479]: time="2026-03-07T00:54:30.815318569Z" level=info msg="CreateContainer within sandbox \"263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 00:54:30.834588 containerd[1479]: time="2026-03-07T00:54:30.834530832Z" level=info msg="CreateContainer within sandbox \"263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dc6a635a759eb6c31556378c56aade4ac4dad0c41e1e61d53b158e28a7dbd223\"" Mar 7 00:54:30.834914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588436311.mount: Deactivated successfully. 
Mar 7 00:54:30.837228 containerd[1479]: time="2026-03-07T00:54:30.835900037Z" level=info msg="StartContainer for \"dc6a635a759eb6c31556378c56aade4ac4dad0c41e1e61d53b158e28a7dbd223\"" Mar 7 00:54:30.872262 systemd[1]: Started cri-containerd-dc6a635a759eb6c31556378c56aade4ac4dad0c41e1e61d53b158e28a7dbd223.scope - libcontainer container dc6a635a759eb6c31556378c56aade4ac4dad0c41e1e61d53b158e28a7dbd223. Mar 7 00:54:30.910825 containerd[1479]: time="2026-03-07T00:54:30.910768443Z" level=info msg="StartContainer for \"dc6a635a759eb6c31556378c56aade4ac4dad0c41e1e61d53b158e28a7dbd223\" returns successfully" Mar 7 00:54:31.283366 systemd[1]: run-containerd-runc-k8s.io-dc6a635a759eb6c31556378c56aade4ac4dad0c41e1e61d53b158e28a7dbd223-runc.kUa5WF.mount: Deactivated successfully. Mar 7 00:54:31.285428 kubelet[2662]: I0307 00:54:31.285048 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-whjl7" podStartSLOduration=25.385814949 podStartE2EDuration="30.285026334s" podCreationTimestamp="2026-03-07 00:54:01 +0000 UTC" firstStartedPulling="2026-03-07 00:54:25.909481883 +0000 UTC m=+46.180222777" lastFinishedPulling="2026-03-07 00:54:30.808693268 +0000 UTC m=+51.079434162" observedRunningTime="2026-03-07 00:54:31.278341313 +0000 UTC m=+51.549082207" watchObservedRunningTime="2026-03-07 00:54:31.285026334 +0000 UTC m=+51.555767268" Mar 7 00:54:32.836920 containerd[1479]: time="2026-03-07T00:54:32.836805390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:32.838490 containerd[1479]: time="2026-03-07T00:54:32.838405395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 00:54:32.839745 containerd[1479]: time="2026-03-07T00:54:32.839395798Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:32.842320 containerd[1479]: time="2026-03-07T00:54:32.842020005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:32.843750 containerd[1479]: time="2026-03-07T00:54:32.842898208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.0339959s" Mar 7 00:54:32.843750 containerd[1479]: time="2026-03-07T00:54:32.842933008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:54:32.844139 containerd[1479]: time="2026-03-07T00:54:32.844106691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:54:32.850076 containerd[1479]: time="2026-03-07T00:54:32.849683467Z" level=info msg="CreateContainer within sandbox \"fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:54:32.871973 containerd[1479]: time="2026-03-07T00:54:32.871338930Z" level=info 
msg="CreateContainer within sandbox \"fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0050e2743b9351f785ef3adc29aac073e9cb013317d4725e8d73fb0fa930d409\"" Mar 7 00:54:32.874228 containerd[1479]: time="2026-03-07T00:54:32.874152138Z" level=info msg="StartContainer for \"0050e2743b9351f785ef3adc29aac073e9cb013317d4725e8d73fb0fa930d409\"" Mar 7 00:54:32.919132 systemd[1]: Started cri-containerd-0050e2743b9351f785ef3adc29aac073e9cb013317d4725e8d73fb0fa930d409.scope - libcontainer container 0050e2743b9351f785ef3adc29aac073e9cb013317d4725e8d73fb0fa930d409. Mar 7 00:54:32.958420 containerd[1479]: time="2026-03-07T00:54:32.958376381Z" level=info msg="StartContainer for \"0050e2743b9351f785ef3adc29aac073e9cb013317d4725e8d73fb0fa930d409\" returns successfully" Mar 7 00:54:33.232577 containerd[1479]: time="2026-03-07T00:54:33.231642328Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:33.233968 containerd[1479]: time="2026-03-07T00:54:33.233939974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 00:54:33.236059 containerd[1479]: time="2026-03-07T00:54:33.236026700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 391.878209ms" Mar 7 00:54:33.236179 containerd[1479]: time="2026-03-07T00:54:33.236162980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:54:33.237503 containerd[1479]: time="2026-03-07T00:54:33.237480864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 00:54:33.243655 containerd[1479]: time="2026-03-07T00:54:33.243596960Z" level=info msg="CreateContainer within sandbox \"e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:54:33.274104 containerd[1479]: time="2026-03-07T00:54:33.273756642Z" level=info msg="CreateContainer within sandbox \"e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"91d7e77ac25c4ae1159765f215097e45cf09aa7b66495cbad7c97eb560f4ab46\"" Mar 7 00:54:33.276664 containerd[1479]: time="2026-03-07T00:54:33.275211406Z" level=info msg="StartContainer for \"91d7e77ac25c4ae1159765f215097e45cf09aa7b66495cbad7c97eb560f4ab46\"" Mar 7 00:54:33.302904 kubelet[2662]: I0307 00:54:33.297746 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-75cc8bd959-pt8pw" podStartSLOduration=26.518262784 podStartE2EDuration="33.297733507s" podCreationTimestamp="2026-03-07 00:54:00 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.064473488 +0000 UTC m=+46.335214382" lastFinishedPulling="2026-03-07 00:54:32.843944251 +0000 UTC m=+53.114685105" observedRunningTime="2026-03-07 00:54:33.297357946 +0000 UTC m=+53.568098840" watchObservedRunningTime="2026-03-07 00:54:33.297733507 +0000 UTC m=+53.568474401" Mar 7 
00:54:33.317106 systemd[1]: Started cri-containerd-91d7e77ac25c4ae1159765f215097e45cf09aa7b66495cbad7c97eb560f4ab46.scope - libcontainer container 91d7e77ac25c4ae1159765f215097e45cf09aa7b66495cbad7c97eb560f4ab46. Mar 7 00:54:33.375780 containerd[1479]: time="2026-03-07T00:54:33.375735158Z" level=info msg="StartContainer for \"91d7e77ac25c4ae1159765f215097e45cf09aa7b66495cbad7c97eb560f4ab46\" returns successfully" Mar 7 00:54:34.311741 kubelet[2662]: I0307 00:54:34.311656 2662 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:35.315467 kubelet[2662]: I0307 00:54:35.315438 2662 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:35.599212 containerd[1479]: time="2026-03-07T00:54:35.599016329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:35.601560 containerd[1479]: time="2026-03-07T00:54:35.601470855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 00:54:35.602559 containerd[1479]: time="2026-03-07T00:54:35.602518137Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:35.606229 containerd[1479]: time="2026-03-07T00:54:35.606170026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:35.607904 containerd[1479]: time="2026-03-07T00:54:35.606717427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.367781159s" Mar 7 00:54:35.607904 containerd[1479]: time="2026-03-07T00:54:35.606749507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 00:54:35.611800 containerd[1479]: time="2026-03-07T00:54:35.611618399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 00:54:35.630916 containerd[1479]: time="2026-03-07T00:54:35.630664324Z" level=info msg="CreateContainer within sandbox \"e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 00:54:35.646637 containerd[1479]: time="2026-03-07T00:54:35.646581442Z" level=info msg="CreateContainer within sandbox \"e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"377398f3c84944855bff6716f8b8301014c9cc6fc8e6ce8ca8fb2eb80c542468\"" Mar 7 00:54:35.649328 containerd[1479]: time="2026-03-07T00:54:35.648521286Z" level=info msg="StartContainer for \"377398f3c84944855bff6716f8b8301014c9cc6fc8e6ce8ca8fb2eb80c542468\"" Mar 7 00:54:35.709063 systemd[1]: Started cri-containerd-377398f3c84944855bff6716f8b8301014c9cc6fc8e6ce8ca8fb2eb80c542468.scope - libcontainer container 
377398f3c84944855bff6716f8b8301014c9cc6fc8e6ce8ca8fb2eb80c542468. Mar 7 00:54:35.744698 containerd[1479]: time="2026-03-07T00:54:35.744565035Z" level=info msg="StartContainer for \"377398f3c84944855bff6716f8b8301014c9cc6fc8e6ce8ca8fb2eb80c542468\" returns successfully" Mar 7 00:54:36.345621 kubelet[2662]: I0307 00:54:36.344473 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6777b486ff-5jxw7" podStartSLOduration=23.977315303 podStartE2EDuration="33.344454291s" podCreationTimestamp="2026-03-07 00:54:03 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.243272728 +0000 UTC m=+46.514013582" lastFinishedPulling="2026-03-07 00:54:35.610411676 +0000 UTC m=+55.881152570" observedRunningTime="2026-03-07 00:54:36.343549728 +0000 UTC m=+56.614290622" watchObservedRunningTime="2026-03-07 00:54:36.344454291 +0000 UTC m=+56.615195145" Mar 7 00:54:36.345621 kubelet[2662]: I0307 00:54:36.344624 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-75cc8bd959-lxvtj" podStartSLOduration=29.254658748 podStartE2EDuration="36.344620171s" podCreationTimestamp="2026-03-07 00:54:00 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.14723052 +0000 UTC m=+46.417971414" lastFinishedPulling="2026-03-07 00:54:33.237191943 +0000 UTC m=+53.507932837" observedRunningTime="2026-03-07 00:54:34.327737159 +0000 UTC m=+54.598478093" watchObservedRunningTime="2026-03-07 00:54:36.344620171 +0000 UTC m=+56.615361065" Mar 7 00:54:36.987589 containerd[1479]: time="2026-03-07T00:54:36.986987443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:36.990133 containerd[1479]: time="2026-03-07T00:54:36.990074370Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:36.990697 containerd[1479]: time="2026-03-07T00:54:36.990670731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 00:54:36.994017 containerd[1479]: time="2026-03-07T00:54:36.993983299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:36.994964 containerd[1479]: time="2026-03-07T00:54:36.994934141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.383284662s" Mar 7 00:54:36.995073 containerd[1479]: time="2026-03-07T00:54:36.995057941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 00:54:37.005148 containerd[1479]: time="2026-03-07T00:54:37.004846842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 00:54:37.012677 containerd[1479]: time="2026-03-07T00:54:37.012621739Z" level=info msg="CreateContainer within sandbox \"d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b\" for container 
&ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 00:54:37.038986 containerd[1479]: time="2026-03-07T00:54:37.038808913Z" level=info msg="CreateContainer within sandbox \"d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ffd74528cf980e2879d4497bd55073e554806cf6bab8aefc922457221dcafd21\"" Mar 7 00:54:37.043343 containerd[1479]: time="2026-03-07T00:54:37.042769322Z" level=info msg="StartContainer for \"ffd74528cf980e2879d4497bd55073e554806cf6bab8aefc922457221dcafd21\"" Mar 7 00:54:37.100115 systemd[1]: Started cri-containerd-ffd74528cf980e2879d4497bd55073e554806cf6bab8aefc922457221dcafd21.scope - libcontainer container ffd74528cf980e2879d4497bd55073e554806cf6bab8aefc922457221dcafd21. Mar 7 00:54:37.143850 containerd[1479]: time="2026-03-07T00:54:37.143792173Z" level=info msg="StartContainer for \"ffd74528cf980e2879d4497bd55073e554806cf6bab8aefc922457221dcafd21\" returns successfully" Mar 7 00:54:38.933147 containerd[1479]: time="2026-03-07T00:54:38.933077151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.934810 containerd[1479]: time="2026-03-07T00:54:38.934752235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 7 00:54:38.936010 containerd[1479]: time="2026-03-07T00:54:38.935956917Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.940897 containerd[1479]: time="2026-03-07T00:54:38.940776046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.942110 containerd[1479]: time="2026-03-07T00:54:38.941630008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.936716606s" Mar 7 00:54:38.942110 containerd[1479]: time="2026-03-07T00:54:38.941666568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 7 00:54:38.944030 containerd[1479]: time="2026-03-07T00:54:38.943996093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 00:54:38.948151 containerd[1479]: time="2026-03-07T00:54:38.948107421Z" level=info msg="CreateContainer within sandbox \"d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 00:54:38.967570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456554056.mount: Deactivated successfully. 
Mar 7 00:54:38.977784 containerd[1479]: time="2026-03-07T00:54:38.977552318Z" level=info msg="CreateContainer within sandbox \"d7e8c6727ab66e9ed115ef6eb2458a05603f2f561f02059ae84203f2485a0ef1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"27289544eb297e3f25b43e63a1a724f0a2147e4d9c0727b994eb8f88fcdac991\"" Mar 7 00:54:38.978924 containerd[1479]: time="2026-03-07T00:54:38.978281600Z" level=info msg="StartContainer for \"27289544eb297e3f25b43e63a1a724f0a2147e4d9c0727b994eb8f88fcdac991\"" Mar 7 00:54:39.009254 systemd[1]: run-containerd-runc-k8s.io-27289544eb297e3f25b43e63a1a724f0a2147e4d9c0727b994eb8f88fcdac991-runc.eMrvn8.mount: Deactivated successfully. Mar 7 00:54:39.020175 systemd[1]: Started cri-containerd-27289544eb297e3f25b43e63a1a724f0a2147e4d9c0727b994eb8f88fcdac991.scope - libcontainer container 27289544eb297e3f25b43e63a1a724f0a2147e4d9c0727b994eb8f88fcdac991. Mar 7 00:54:39.048616 containerd[1479]: time="2026-03-07T00:54:39.048535532Z" level=info msg="StartContainer for \"27289544eb297e3f25b43e63a1a724f0a2147e4d9c0727b994eb8f88fcdac991\" returns successfully" Mar 7 00:54:39.380217 kubelet[2662]: I0307 00:54:39.380052 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-76chp" podStartSLOduration=21.979798197 podStartE2EDuration="36.380037901s" podCreationTimestamp="2026-03-07 00:54:03 +0000 UTC" firstStartedPulling="2026-03-07 00:54:24.542664147 +0000 UTC m=+44.813405041" lastFinishedPulling="2026-03-07 00:54:38.942903851 +0000 UTC m=+59.213644745" observedRunningTime="2026-03-07 00:54:39.37979702 +0000 UTC m=+59.650537914" watchObservedRunningTime="2026-03-07 00:54:39.380037901 +0000 UTC m=+59.650778795" Mar 7 00:54:39.873293 containerd[1479]: time="2026-03-07T00:54:39.872971966Z" level=info msg="StopPodSandbox for \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\"" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.916 [WARNING][5235] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"f0b2393b-a5c9-465b-b0f4-23d9615ad922", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc", Pod:"calico-apiserver-75cc8bd959-lxvtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali92c16f77f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.917 [INFO][5235] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.917 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" iface="eth0" netns="" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.917 [INFO][5235] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.917 [INFO][5235] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.947 [INFO][5242] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.949 [INFO][5242] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.949 [INFO][5242] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.963 [WARNING][5242] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.963 [INFO][5242] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.966 [INFO][5242] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:39.973412 containerd[1479]: 2026-03-07 00:54:39.970 [INFO][5235] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:39.974379 containerd[1479]: time="2026-03-07T00:54:39.973948112Z" level=info msg="TearDown network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\" successfully" Mar 7 00:54:39.974379 containerd[1479]: time="2026-03-07T00:54:39.973975152Z" level=info msg="StopPodSandbox for \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\" returns successfully" Mar 7 00:54:39.976007 containerd[1479]: time="2026-03-07T00:54:39.975976276Z" level=info msg="RemovePodSandbox for \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\"" Mar 7 00:54:39.979009 containerd[1479]: time="2026-03-07T00:54:39.978967281Z" level=info msg="Forcibly stopping sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\"" Mar 7 00:54:39.981338 kubelet[2662]: I0307 00:54:39.981254 2662 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 00:54:39.981338 kubelet[2662]: I0307 00:54:39.981295 2662 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.036 [WARNING][5256] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"f0b2393b-a5c9-465b-b0f4-23d9615ad922", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"e22694be4d8e269ed5fd79213b865b4e7a00083ce2fd977fd6745b5cc03b2fdc", Pod:"calico-apiserver-75cc8bd959-lxvtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali92c16f77f46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.040 [INFO][5256] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.040 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" iface="eth0" netns="" Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.040 [INFO][5256] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.041 [INFO][5256] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.066 [INFO][5263] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.067 [INFO][5263] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.067 [INFO][5263] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.082 [WARNING][5263] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.082 [INFO][5263] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" HandleID="k8s-pod-network.1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--lxvtj-eth0" Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.084 [INFO][5263] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.090012 containerd[1479]: 2026-03-07 00:54:40.086 [INFO][5256] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc" Mar 7 00:54:40.090733 containerd[1479]: time="2026-03-07T00:54:40.090054475Z" level=info msg="TearDown network for sandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\" successfully" Mar 7 00:54:40.097037 containerd[1479]: time="2026-03-07T00:54:40.096992327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:40.097170 containerd[1479]: time="2026-03-07T00:54:40.097071327Z" level=info msg="RemovePodSandbox \"1a2a9c8c0a0ba1b690c195a48fd81bbd3c622e9229ea17151e529686f004d9fc\" returns successfully" Mar 7 00:54:40.097910 containerd[1479]: time="2026-03-07T00:54:40.097640448Z" level=info msg="StopPodSandbox for \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\"" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.152 [WARNING][5279] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"0d2b6f3a-719f-4e2a-8ee9-96f451414fe3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d", Pod:"calico-apiserver-75cc8bd959-pt8pw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia49e43bdc11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.152 [INFO][5279] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.152 [INFO][5279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" iface="eth0" netns="" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.152 [INFO][5279] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.152 [INFO][5279] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.179 [INFO][5286] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.179 [INFO][5286] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.179 [INFO][5286] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.189 [WARNING][5286] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.189 [INFO][5286] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.191 [INFO][5286] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.197254 containerd[1479]: 2026-03-07 00:54:40.195 [INFO][5279] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.198001 containerd[1479]: time="2026-03-07T00:54:40.197728820Z" level=info msg="TearDown network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\" successfully" Mar 7 00:54:40.198001 containerd[1479]: time="2026-03-07T00:54:40.197758900Z" level=info msg="StopPodSandbox for \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\" returns successfully" Mar 7 00:54:40.198427 containerd[1479]: time="2026-03-07T00:54:40.198342421Z" level=info msg="RemovePodSandbox for \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\"" Mar 7 00:54:40.198427 containerd[1479]: time="2026-03-07T00:54:40.198390942Z" level=info msg="Forcibly stopping sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\"" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.241 [WARNING][5300] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0", GenerateName:"calico-apiserver-75cc8bd959-", Namespace:"calico-system", SelfLink:"", UID:"0d2b6f3a-719f-4e2a-8ee9-96f451414fe3", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75cc8bd959", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"fedc5d936a8be4284a479a5721fd763b0f3bd5c59199a6451f61063f8d2f961d", Pod:"calico-apiserver-75cc8bd959-pt8pw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia49e43bdc11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.241 [INFO][5300] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.241 [INFO][5300] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" iface="eth0" netns="" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.241 [INFO][5300] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.242 [INFO][5300] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.268 [INFO][5307] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.268 [INFO][5307] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.269 [INFO][5307] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.282 [WARNING][5307] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.282 [INFO][5307] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" HandleID="k8s-pod-network.b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--apiserver--75cc8bd959--pt8pw-eth0" Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.284 [INFO][5307] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.288156 containerd[1479]: 2026-03-07 00:54:40.286 [INFO][5300] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6" Mar 7 00:54:40.288654 containerd[1479]: time="2026-03-07T00:54:40.288198856Z" level=info msg="TearDown network for sandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\" successfully" Mar 7 00:54:40.292518 containerd[1479]: time="2026-03-07T00:54:40.292477704Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:40.292712 containerd[1479]: time="2026-03-07T00:54:40.292555224Z" level=info msg="RemovePodSandbox \"b9571f013656ee27cb5f34c7dc6999b03cd3ccd7404f014badab6f6b2d94eba6\" returns successfully" Mar 7 00:54:40.293640 containerd[1479]: time="2026-03-07T00:54:40.293358585Z" level=info msg="StopPodSandbox for \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\"" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.337 [WARNING][5321] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b0d81311-37f7-4001-8962-252a531602c3", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40", Pod:"coredns-7d764666f9-vwsss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d94602808e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.338 [INFO][5321] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.338 [INFO][5321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" iface="eth0" netns="" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.338 [INFO][5321] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.338 [INFO][5321] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.363 [INFO][5328] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.363 [INFO][5328] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.363 [INFO][5328] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.378 [WARNING][5328] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.378 [INFO][5328] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.380 [INFO][5328] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.384017 containerd[1479]: 2026-03-07 00:54:40.382 [INFO][5321] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.385361 containerd[1479]: time="2026-03-07T00:54:40.384042061Z" level=info msg="TearDown network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\" successfully" Mar 7 00:54:40.385361 containerd[1479]: time="2026-03-07T00:54:40.384067181Z" level=info msg="StopPodSandbox for \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\" returns successfully" Mar 7 00:54:40.385361 containerd[1479]: time="2026-03-07T00:54:40.384608542Z" level=info msg="RemovePodSandbox for \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\"" Mar 7 00:54:40.385361 containerd[1479]: time="2026-03-07T00:54:40.384641142Z" level=info msg="Forcibly stopping sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\"" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.423 [WARNING][5342] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"b0d81311-37f7-4001-8962-252a531602c3", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"cdc58ab119d0b2666f9fc0ba3b2407e012fc8d03ed570c2b5e121471ffbcfe40", Pod:"coredns-7d764666f9-vwsss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3d94602808e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.424 [INFO][5342] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.424 [INFO][5342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" iface="eth0" netns="" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.424 [INFO][5342] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.424 [INFO][5342] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.449 [INFO][5349] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.449 [INFO][5349] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.449 [INFO][5349] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.460 [WARNING][5349] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.460 [INFO][5349] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" HandleID="k8s-pod-network.cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--vwsss-eth0" Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.461 [INFO][5349] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.466333 containerd[1479]: 2026-03-07 00:54:40.463 [INFO][5342] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0" Mar 7 00:54:40.466333 containerd[1479]: time="2026-03-07T00:54:40.465557842Z" level=info msg="TearDown network for sandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\" successfully" Mar 7 00:54:40.482633 containerd[1479]: time="2026-03-07T00:54:40.482514391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:40.483016 containerd[1479]: time="2026-03-07T00:54:40.482974792Z" level=info msg="RemovePodSandbox \"cc1cb765e28b3c011a76fbe594d8a75ac8a5495bcc88d3c9e1749b42a2a2d0c0\" returns successfully" Mar 7 00:54:40.483923 containerd[1479]: time="2026-03-07T00:54:40.483821913Z" level=info msg="StopPodSandbox for \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\"" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.525 [WARNING][5363] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d6b126d7-c5ec-4b1a-85ff-31aabc7601d3", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9", Pod:"coredns-7d764666f9-rvn46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d8581b62", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.525 [INFO][5363] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.525 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" iface="eth0" netns="" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.525 [INFO][5363] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.525 [INFO][5363] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.547 [INFO][5370] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.547 [INFO][5370] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.548 [INFO][5370] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.558 [WARNING][5370] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.558 [INFO][5370] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.560 [INFO][5370] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.564613 containerd[1479]: 2026-03-07 00:54:40.562 [INFO][5363] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.566793 containerd[1479]: time="2026-03-07T00:54:40.564629412Z" level=info msg="TearDown network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\" successfully" Mar 7 00:54:40.566793 containerd[1479]: time="2026-03-07T00:54:40.564657732Z" level=info msg="StopPodSandbox for \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\" returns successfully" Mar 7 00:54:40.566793 containerd[1479]: time="2026-03-07T00:54:40.565405534Z" level=info msg="RemovePodSandbox for \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\"" Mar 7 00:54:40.566793 containerd[1479]: time="2026-03-07T00:54:40.565433014Z" level=info msg="Forcibly stopping sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\"" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.612 [WARNING][5384] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d6b126d7-c5ec-4b1a-85ff-31aabc7601d3", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"422819c69508a5da9f043dd3f386945c97df996a6952edaa60e3b2279554b1e9", Pod:"coredns-7d764666f9-rvn46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali715d8581b62", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.613 [INFO][5384] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.613 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" iface="eth0" netns="" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.613 [INFO][5384] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.613 [INFO][5384] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.634 [INFO][5391] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.634 [INFO][5391] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.634 [INFO][5391] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.648 [WARNING][5391] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.648 [INFO][5391] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" HandleID="k8s-pod-network.92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-coredns--7d764666f9--rvn46-eth0" Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.650 [INFO][5391] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.654900 containerd[1479]: 2026-03-07 00:54:40.652 [INFO][5384] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3" Mar 7 00:54:40.654900 containerd[1479]: time="2026-03-07T00:54:40.654366687Z" level=info msg="TearDown network for sandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\" successfully" Mar 7 00:54:40.660923 containerd[1479]: time="2026-03-07T00:54:40.660724738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:40.660923 containerd[1479]: time="2026-03-07T00:54:40.660817098Z" level=info msg="RemovePodSandbox \"92b2eb1acec2d23d00e57f73a8a569534980d8947292b72c9a8210d55cc7b0f3\" returns successfully" Mar 7 00:54:40.661562 containerd[1479]: time="2026-03-07T00:54:40.661528779Z" level=info msg="StopPodSandbox for \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\"" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.700 [WARNING][5406] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f6fc484b-21d3-4f5d-b25e-f88bbf89a117", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542", Pod:"goldmane-9f7667bb8-whjl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b05a14c8e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.701 [INFO][5406] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.701 [INFO][5406] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" iface="eth0" netns="" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.701 [INFO][5406] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.701 [INFO][5406] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.722 [INFO][5413] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.723 [INFO][5413] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.723 [INFO][5413] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.734 [WARNING][5413] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.735 [INFO][5413] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.737 [INFO][5413] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.742440 containerd[1479]: 2026-03-07 00:54:40.740 [INFO][5406] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.743709 containerd[1479]: time="2026-03-07T00:54:40.742987159Z" level=info msg="TearDown network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\" successfully" Mar 7 00:54:40.743709 containerd[1479]: time="2026-03-07T00:54:40.743029680Z" level=info msg="StopPodSandbox for \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\" returns successfully" Mar 7 00:54:40.746097 containerd[1479]: time="2026-03-07T00:54:40.745768564Z" level=info msg="RemovePodSandbox for \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\"" Mar 7 00:54:40.746097 containerd[1479]: time="2026-03-07T00:54:40.745826004Z" level=info msg="Forcibly stopping sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\"" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.791 [WARNING][5428] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"f6fc484b-21d3-4f5d-b25e-f88bbf89a117", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"263b9bb8d796d7e6e2891e7f1c61711297f6e47697624d7fdad88f73746f4542", Pod:"goldmane-9f7667bb8-whjl7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7b05a14c8e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.791 [INFO][5428] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.791 [INFO][5428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" iface="eth0" netns="" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.791 [INFO][5428] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.791 [INFO][5428] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.814 [INFO][5435] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.814 [INFO][5435] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.814 [INFO][5435] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.831 [WARNING][5435] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.831 [INFO][5435] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" HandleID="k8s-pod-network.33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-goldmane--9f7667bb8--whjl7-eth0" Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.834 [INFO][5435] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.839951 containerd[1479]: 2026-03-07 00:54:40.836 [INFO][5428] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f" Mar 7 00:54:40.839951 containerd[1479]: time="2026-03-07T00:54:40.839721006Z" level=info msg="TearDown network for sandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\" successfully" Mar 7 00:54:40.844333 containerd[1479]: time="2026-03-07T00:54:40.844284734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:40.844459 containerd[1479]: time="2026-03-07T00:54:40.844389814Z" level=info msg="RemovePodSandbox \"33459fe25dfb3fa5789938b3d9f83816c9ea4d4abf9c83c966cda0fd88e7d32f\" returns successfully" Mar 7 00:54:40.845096 containerd[1479]: time="2026-03-07T00:54:40.845005975Z" level=info msg="StopPodSandbox for \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\"" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.886 [WARNING][5449] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0", GenerateName:"calico-kube-controllers-6777b486ff-", Namespace:"calico-system", SelfLink:"", UID:"34b5bba8-5d4e-4611-99bc-c577c12e794e", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777b486ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466", Pod:"calico-kube-controllers-6777b486ff-5jxw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17b01ae894", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.887 [INFO][5449] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.887 [INFO][5449] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" iface="eth0" netns="" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.887 [INFO][5449] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.887 [INFO][5449] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.910 [INFO][5456] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.910 [INFO][5456] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.910 [INFO][5456] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.921 [WARNING][5456] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.922 [INFO][5456] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.926 [INFO][5456] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:40.931890 containerd[1479]: 2026-03-07 00:54:40.929 [INFO][5449] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:40.931890 containerd[1479]: time="2026-03-07T00:54:40.931825605Z" level=info msg="TearDown network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\" successfully" Mar 7 00:54:40.931890 containerd[1479]: time="2026-03-07T00:54:40.931855165Z" level=info msg="StopPodSandbox for \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\" returns successfully" Mar 7 00:54:40.933195 containerd[1479]: time="2026-03-07T00:54:40.932532046Z" level=info msg="RemovePodSandbox for \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\"" Mar 7 00:54:40.933195 containerd[1479]: time="2026-03-07T00:54:40.932564606Z" level=info msg="Forcibly stopping sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\"" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:40.988 [WARNING][5470] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0", GenerateName:"calico-kube-controllers-6777b486ff-", Namespace:"calico-system", SelfLink:"", UID:"34b5bba8-5d4e-4611-99bc-c577c12e794e", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6777b486ff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-d5610c1cbf", ContainerID:"e27e666c2d0e0fbb920183c30dbe730558eb2963d9b46e2d3e51f6d6e85af466", Pod:"calico-kube-controllers-6777b486ff-5jxw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia17b01ae894", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:40.989 [INFO][5470] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:40.989 [INFO][5470] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" iface="eth0" netns="" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:40.989 [INFO][5470] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:40.989 [INFO][5470] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:41.020 [INFO][5477] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:41.020 [INFO][5477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:41.021 [INFO][5477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:41.032 [WARNING][5477] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:41.032 [INFO][5477] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" HandleID="k8s-pod-network.c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-calico--kube--controllers--6777b486ff--5jxw7-eth0" Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:41.034 [INFO][5477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:41.038520 containerd[1479]: 2026-03-07 00:54:41.036 [INFO][5470] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3" Mar 7 00:54:41.041931 containerd[1479]: time="2026-03-07T00:54:41.040688468Z" level=info msg="TearDown network for sandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\" successfully" Mar 7 00:54:41.045142 containerd[1479]: time="2026-03-07T00:54:41.045105715Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:41.045278 containerd[1479]: time="2026-03-07T00:54:41.045176755Z" level=info msg="RemovePodSandbox \"c732e68c20ad6b6d0597884d72448b1fc3478f956b605a30500a21208d33e1d3\" returns successfully" Mar 7 00:54:41.046175 containerd[1479]: time="2026-03-07T00:54:41.045734356Z" level=info msg="StopPodSandbox for \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\"" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.088 [WARNING][5491] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.088 [INFO][5491] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.088 [INFO][5491] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" iface="eth0" netns="" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.088 [INFO][5491] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.088 [INFO][5491] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.118 [INFO][5498] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.118 [INFO][5498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.118 [INFO][5498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.131 [WARNING][5498] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.131 [INFO][5498] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.134 [INFO][5498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:41.138737 containerd[1479]: 2026-03-07 00:54:41.136 [INFO][5491] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.139569 containerd[1479]: time="2026-03-07T00:54:41.139405227Z" level=info msg="TearDown network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\" successfully" Mar 7 00:54:41.139569 containerd[1479]: time="2026-03-07T00:54:41.139441227Z" level=info msg="StopPodSandbox for \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\" returns successfully" Mar 7 00:54:41.140159 containerd[1479]: time="2026-03-07T00:54:41.140034308Z" level=info msg="RemovePodSandbox for \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\"" Mar 7 00:54:41.140159 containerd[1479]: time="2026-03-07T00:54:41.140149629Z" level=info msg="Forcibly stopping sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\"" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.182 [WARNING][5512] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" WorkloadEndpoint="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.182 [INFO][5512] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.182 [INFO][5512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" iface="eth0" netns="" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.182 [INFO][5512] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.182 [INFO][5512] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.204 [INFO][5520] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.204 [INFO][5520] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.204 [INFO][5520] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.216 [WARNING][5520] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.216 [INFO][5520] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" HandleID="k8s-pod-network.acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Workload="ci--4081--3--6--n--d5610c1cbf-k8s-whisker--95c58bff6--2zzx6-eth0" Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.219 [INFO][5520] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:41.225556 containerd[1479]: 2026-03-07 00:54:41.222 [INFO][5512] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5" Mar 7 00:54:41.228655 containerd[1479]: time="2026-03-07T00:54:41.226018367Z" level=info msg="TearDown network for sandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\" successfully" Mar 7 00:54:41.233373 containerd[1479]: time="2026-03-07T00:54:41.233333459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:41.233907 containerd[1479]: time="2026-03-07T00:54:41.233523939Z" level=info msg="RemovePodSandbox \"acf9c4e4c4abb04d0b66f2d17b12594295768970519df53dd2543905428b00b5\" returns successfully" Mar 7 00:54:44.320947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4049529664.mount: Deactivated successfully. 
Mar 7 00:54:44.339729 containerd[1479]: time="2026-03-07T00:54:44.338249400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:44.340385 containerd[1479]: time="2026-03-07T00:54:44.340263523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 00:54:44.341403 containerd[1479]: time="2026-03-07T00:54:44.341182924Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:44.344851 containerd[1479]: time="2026-03-07T00:54:44.344799929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:44.345971 containerd[1479]: time="2026-03-07T00:54:44.345932810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 5.401898317s" Mar 7 00:54:44.346070 containerd[1479]: time="2026-03-07T00:54:44.345973050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 00:54:44.352342 containerd[1479]: time="2026-03-07T00:54:44.352299619Z" level=info msg="CreateContainer within sandbox \"d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 00:54:44.368626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12498235.mount: Deactivated successfully. Mar 7 00:54:44.371105 containerd[1479]: time="2026-03-07T00:54:44.371049164Z" level=info msg="CreateContainer within sandbox \"d5533b08add5f2fcb04bbc221af540ddaa0773cdd17f9829b067ab2fa8933b2b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7bc9dec4a3781213c549400f66885198f500db5a19fe0b5eea7273f1e8222d2d\"" Mar 7 00:54:44.371934 containerd[1479]: time="2026-03-07T00:54:44.371749205Z" level=info msg="StartContainer for \"7bc9dec4a3781213c549400f66885198f500db5a19fe0b5eea7273f1e8222d2d\"" Mar 7 00:54:44.408107 systemd[1]: Started cri-containerd-7bc9dec4a3781213c549400f66885198f500db5a19fe0b5eea7273f1e8222d2d.scope - libcontainer container 7bc9dec4a3781213c549400f66885198f500db5a19fe0b5eea7273f1e8222d2d. 
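From the PullImage record just above, the whisker-backend image (16,426,424 bytes by repo digest, 16,426,594 bytes read) came down in 5.401898317s, i.e. roughly 3 MB/s. A tiny Go check of that arithmetic, using only figures copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the PullImage record above.
	const imageBytes = 16426424
	pullTime, err := time.ParseDuration("5.401898317s")
	if err != nil {
		panic(err)
	}

	rate := float64(imageBytes) / pullTime.Seconds()
	fmt.Printf("~%.2f MB/s (%.2f MiB/s) effective pull rate\n", rate/1e6, rate/(1<<20))
}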
Mar 7 00:54:44.448328 containerd[1479]: time="2026-03-07T00:54:44.448184186Z" level=info msg="StartContainer for \"7bc9dec4a3781213c549400f66885198f500db5a19fe0b5eea7273f1e8222d2d\" returns successfully" Mar 7 00:54:45.419319 kubelet[2662]: I0307 00:54:45.419168 2662 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-75fb54fcd5-hktf8" podStartSLOduration=2.679317101 podStartE2EDuration="20.419152516s" podCreationTimestamp="2026-03-07 00:54:25 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.607591117 +0000 UTC m=+46.878331971" lastFinishedPulling="2026-03-07 00:54:44.347426452 +0000 UTC m=+64.618167386" observedRunningTime="2026-03-07 00:54:45.416141037 +0000 UTC m=+65.686881931" watchObservedRunningTime="2026-03-07 00:54:45.419152516 +0000 UTC m=+65.689893410" Mar 7 00:54:49.459063 kubelet[2662]: I0307 00:54:49.458364 2662 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:55.668937 kubelet[2662]: I0307 00:54:55.668808 2662 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:55:44.406412 update_engine[1461]: I20260307 00:55:44.406221 1461 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 7 00:55:44.406412 update_engine[1461]: I20260307 00:55:44.406297 1461 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 7 00:55:44.408088 update_engine[1461]: I20260307 00:55:44.408018 1461 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 7 00:55:44.411190 update_engine[1461]: I20260307 00:55:44.411130 1461 omaha_request_params.cc:62] Current group set to lts Mar 7 00:55:44.411331 update_engine[1461]: I20260307 00:55:44.411284 1461 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 7 00:55:44.411331 update_engine[1461]: I20260307 00:55:44.411304 1461 update_attempter.cc:643] Scheduling an action processor start. Mar 7 00:55:44.411453 update_engine[1461]: I20260307 00:55:44.411330 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 00:55:44.412937 update_engine[1461]: I20260307 00:55:44.412853 1461 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 7 00:55:44.413022 update_engine[1461]: I20260307 00:55:44.412987 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 00:55:44.413022 update_engine[1461]: I20260307 00:55:44.412998 1461 omaha_request_action.cc:272] Request: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: Mar 7 00:55:44.413022 update_engine[1461]: I20260307 00:55:44.413005 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:55:44.417932 update_engine[1461]: I20260307 00:55:44.417129 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:55:44.417932 update_engine[1461]: I20260307 00:55:44.417497 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
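The pod_startup_latency_tracker record above encodes a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). Kubelet appears to do the pull-time subtraction on the monotonic readings (the m=+ values), which is why the last digits differ slightly from a wall-clock calculation. A short Go check using the wall-clock timestamps copied from the record, assuming nothing beyond what the log shows:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Wall-clock timestamps copied from the pod_startup_latency_tracker record.
	created := parse("2026-03-07 00:54:25 +0000 UTC")
	firstPull := parse("2026-03-07 00:54:26.607591117 +0000 UTC")
	lastPull := parse("2026-03-07 00:54:44.347426452 +0000 UTC")
	running := parse("2026-03-07 00:54:45.419152516 +0000 UTC")

	e2e := running.Sub(created)        // ~20.419152516s, the reported podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // ~17.74s spent pulling the whisker images
	slo := e2e - pulling               // ~2.679s, in line with podStartSLOduration

	fmt.Println("e2e:", e2e, "pulling:", pulling, "slo:", slo)
}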
Mar 7 00:55:44.420230 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 7 00:55:44.421656 update_engine[1461]: E20260307 00:55:44.420418 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:55:44.421656 update_engine[1461]: I20260307 00:55:44.421617 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 7 00:55:54.355966 update_engine[1461]: I20260307 00:55:54.355810 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:55:54.356795 update_engine[1461]: I20260307 00:55:54.356117 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:55:54.356795 update_engine[1461]: I20260307 00:55:54.356374 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 00:55:54.357968 update_engine[1461]: E20260307 00:55:54.357925 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:55:54.358053 update_engine[1461]: I20260307 00:55:54.357998 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 7 00:56:04.357097 update_engine[1461]: I20260307 00:56:04.356575 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:04.357097 update_engine[1461]: I20260307 00:56:04.357090 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:04.357608 update_engine[1461]: I20260307 00:56:04.357417 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 7 00:56:04.358539 update_engine[1461]: E20260307 00:56:04.358484 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:56:04.358609 update_engine[1461]: I20260307 00:56:04.358548 1461 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 7 00:56:11.647257 systemd[1]: Started sshd@21-88.99.14.23:22-20.161.92.111:56028.service - OpenSSH per-connection server daemon (20.161.92.111:56028). Mar 7 00:56:12.240896 sshd[5911]: Accepted publickey for core from 20.161.92.111 port 56028 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:12.245743 sshd[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:12.252027 systemd-logind[1459]: New session 8 of user core. Mar 7 00:56:12.255143 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 00:56:12.750746 sshd[5911]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:12.755122 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Mar 7 00:56:12.755454 systemd[1]: sshd@21-88.99.14.23:22-20.161.92.111:56028.service: Deactivated successfully. Mar 7 00:56:12.758267 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 00:56:12.760997 systemd-logind[1459]: Removed session 8. Mar 7 00:56:14.349010 update_engine[1461]: I20260307 00:56:14.348354 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:14.349010 update_engine[1461]: I20260307 00:56:14.348623 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:14.349010 update_engine[1461]: I20260307 00:56:14.348865 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
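The update_engine records above show a fixed retry cadence: the Omaha endpoint on this image is configured as the literal string "disabled", so every transfer fails with "Could not resolve host: disabled" and is retried roughly every ten seconds (retry 1, 2, 3 at 00:55:44, 00:55:54, 00:56:04). Below is a hypothetical Go sketch of that bounded, fixed-delay retry pattern; the URL and the three-attempt cap are stand-ins for illustration, not update_engine's actual configuration or code.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkOnce mirrors one "Starting/Resuming transfer" attempt: the host name
// here can never resolve, so the request fails before any HTTP response.
func checkOnce(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err // e.g. "dial tcp: lookup disabled: no such host"
	}
	resp.Body.Close()
	return nil
}

func main() {
	const url = "http://disabled/update" // stand-in for the disabled Omaha endpoint
	for attempt := 1; attempt <= 3; attempt++ {
		if err := checkOnce(url); err != nil {
			fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
			time.Sleep(10 * time.Second) // the ~10 s cadence visible in the log
			continue
		}
		fmt.Println("update check succeeded")
		return
	}
	fmt.Println("giving up; reporting error event")
}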
Mar 7 00:56:14.350063 update_engine[1461]: E20260307 00:56:14.350031 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 7 00:56:14.350282 update_engine[1461]: I20260307 00:56:14.350256 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 7 00:56:14.350356 update_engine[1461]: I20260307 00:56:14.350339 1461 omaha_request_action.cc:617] Omaha request response: Mar 7 00:56:14.350754 update_engine[1461]: E20260307 00:56:14.350476 1461 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 7 00:56:14.350754 update_engine[1461]: I20260307 00:56:14.350499 1461 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 7 00:56:14.350754 update_engine[1461]: I20260307 00:56:14.350507 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 7 00:56:14.350754 update_engine[1461]: I20260307 00:56:14.350513 1461 update_attempter.cc:306] Processing Done. Mar 7 00:56:14.350754 update_engine[1461]: E20260307 00:56:14.350539 1461 update_attempter.cc:619] Update failed. Mar 7 00:56:14.350754 update_engine[1461]: I20260307 00:56:14.350549 1461 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 7 00:56:14.350754 update_engine[1461]: I20260307 00:56:14.350555 1461 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 7 00:56:14.350754 update_engine[1461]: I20260307 00:56:14.350562 1461 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 7 00:56:14.351599 update_engine[1461]: I20260307 00:56:14.351115 1461 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 7 00:56:14.351599 update_engine[1461]: I20260307 00:56:14.351166 1461 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 7 00:56:14.351599 update_engine[1461]: I20260307 00:56:14.351173 1461 omaha_request_action.cc:272] Request: Mar 7 00:56:14.351599 update_engine[1461]: Mar 7 00:56:14.351599 update_engine[1461]: Mar 7 00:56:14.351599 update_engine[1461]: Mar 7 00:56:14.351599 update_engine[1461]: Mar 7 00:56:14.351599 update_engine[1461]: Mar 7 00:56:14.351599 update_engine[1461]: Mar 7 00:56:14.351599 update_engine[1461]: I20260307 00:56:14.351181 1461 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 7 00:56:14.351599 update_engine[1461]: I20260307 00:56:14.351341 1461 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 7 00:56:14.351599 update_engine[1461]: I20260307 00:56:14.351533 1461 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
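Each update_engine record carries a glog-style prefix (severity letter, date, time, PID, source file:line, then the message), which is what makes the E-lines above easy to pick out of the journal. A small illustrative Go parser for that prefix, fed one line copied from the log; the regular expression is an assumption about the format, inferred only from the records shown here:

package main

import (
	"fmt"
	"regexp"
)

// glogLine matches "<L><yyyymmdd> <hh:mm:ss.uuuuuu> <pid> <file:line>] <msg>".
var glogLine = regexp.MustCompile(`^([IWEF])(\d{8}) (\d{2}:\d{2}:\d{2}\.\d+) (\d+) ([\w./]+:\d+)\] (.*)$`)

func main() {
	// A record copied from the update_engine output above.
	line := "E20260307 00:56:14.350476 1461 omaha_request_action.cc:636] Omaha request network transfer failed."

	m := glogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a glog-style line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}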
Mar 7 00:56:14.352058 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 00:56:14.352986 update_engine[1461]: E20260307 00:56:14.352741 1461 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 00:56:14.352986 update_engine[1461]: I20260307 00:56:14.352785 1461 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 00:56:14.352986 update_engine[1461]: I20260307 00:56:14.352793 1461 omaha_request_action.cc:617] Omaha request response:
Mar 7 00:56:14.352986 update_engine[1461]: I20260307 00:56:14.352799 1461 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 00:56:14.352986 update_engine[1461]: I20260307 00:56:14.352803 1461 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 00:56:14.352986 update_engine[1461]: I20260307 00:56:14.352809 1461 update_attempter.cc:306] Processing Done.
Mar 7 00:56:14.352986 update_engine[1461]: I20260307 00:56:14.352814 1461 update_attempter.cc:310] Error event sent.
Mar 7 00:56:14.352986 update_engine[1461]: I20260307 00:56:14.352823 1461 update_check_scheduler.cc:74] Next update check in 44m18s
Mar 7 00:56:14.353245 locksmithd[1494]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 00:56:17.862246 systemd[1]: Started sshd@22-88.99.14.23:22-20.161.92.111:56036.service - OpenSSH per-connection server daemon (20.161.92.111:56036).
Mar 7 00:56:18.454780 sshd[5928]: Accepted publickey for core from 20.161.92.111 port 56036 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:18.457366 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:18.461938 systemd-logind[1459]: New session 9 of user core.
Mar 7 00:56:18.466027 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 7 00:56:18.946144 sshd[5928]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:18.952015 systemd[1]: sshd@22-88.99.14.23:22-20.161.92.111:56036.service: Deactivated successfully.
Mar 7 00:56:18.955507 systemd[1]: session-9.scope: Deactivated successfully.
Mar 7 00:56:18.959476 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit.
Mar 7 00:56:18.961201 systemd-logind[1459]: Removed session 9.
Mar 7 00:56:24.055222 systemd[1]: Started sshd@23-88.99.14.23:22-20.161.92.111:37218.service - OpenSSH per-connection server daemon (20.161.92.111:37218).
Mar 7 00:56:24.643960 sshd[5942]: Accepted publickey for core from 20.161.92.111 port 37218 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:24.645768 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:24.651076 systemd-logind[1459]: New session 10 of user core.
Mar 7 00:56:24.655178 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 7 00:56:25.167610 sshd[5942]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:25.172301 systemd[1]: sshd@23-88.99.14.23:22-20.161.92.111:37218.service: Deactivated successfully.
Mar 7 00:56:25.175603 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 00:56:25.177520 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit.
Mar 7 00:56:25.178847 systemd-logind[1459]: Removed session 10.
Mar 7 00:56:30.279345 systemd[1]: Started sshd@24-88.99.14.23:22-20.161.92.111:43404.service - OpenSSH per-connection server daemon (20.161.92.111:43404).
Mar 7 00:56:30.880422 sshd[6015]: Accepted publickey for core from 20.161.92.111 port 43404 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:30.882660 sshd[6015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:30.887554 systemd-logind[1459]: New session 11 of user core.
Mar 7 00:56:30.892090 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 00:56:31.390016 sshd[6015]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:31.395170 systemd[1]: sshd@24-88.99.14.23:22-20.161.92.111:43404.service: Deactivated successfully.
Mar 7 00:56:31.397722 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 00:56:31.401261 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit.
Mar 7 00:56:31.403234 systemd-logind[1459]: Removed session 11.
Mar 7 00:56:31.504368 systemd[1]: Started sshd@25-88.99.14.23:22-20.161.92.111:43408.service - OpenSSH per-connection server daemon (20.161.92.111:43408).
Mar 7 00:56:32.104231 sshd[6049]: Accepted publickey for core from 20.161.92.111 port 43408 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:32.106651 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:32.114848 systemd-logind[1459]: New session 12 of user core.
Mar 7 00:56:32.121179 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 00:56:32.652990 sshd[6049]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:32.659238 systemd[1]: sshd@25-88.99.14.23:22-20.161.92.111:43408.service: Deactivated successfully.
Mar 7 00:56:32.664173 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 00:56:32.666258 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit.
Mar 7 00:56:32.667756 systemd-logind[1459]: Removed session 12.
Mar 7 00:56:32.768228 systemd[1]: Started sshd@26-88.99.14.23:22-20.161.92.111:43410.service - OpenSSH per-connection server daemon (20.161.92.111:43410).
Mar 7 00:56:33.355022 sshd[6060]: Accepted publickey for core from 20.161.92.111 port 43410 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:33.357092 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:33.364098 systemd-logind[1459]: New session 13 of user core.
Mar 7 00:56:33.374200 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 00:56:33.852521 sshd[6060]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:33.857646 systemd[1]: sshd@26-88.99.14.23:22-20.161.92.111:43410.service: Deactivated successfully.
Mar 7 00:56:33.861694 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 00:56:33.865449 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit.
Mar 7 00:56:33.867949 systemd-logind[1459]: Removed session 13.
Mar 7 00:56:38.963429 systemd[1]: Started sshd@27-88.99.14.23:22-20.161.92.111:43414.service - OpenSSH per-connection server daemon (20.161.92.111:43414).
Mar 7 00:56:39.551992 sshd[6115]: Accepted publickey for core from 20.161.92.111 port 43414 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:39.553667 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:39.560014 systemd-logind[1459]: New session 14 of user core.
Mar 7 00:56:39.567342 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 00:56:40.063091 sshd[6115]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:40.069498 systemd[1]: sshd@27-88.99.14.23:22-20.161.92.111:43414.service: Deactivated successfully.
Mar 7 00:56:40.074384 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 00:56:40.076884 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit.
Mar 7 00:56:40.078492 systemd-logind[1459]: Removed session 14.
Mar 7 00:56:40.175255 systemd[1]: Started sshd@28-88.99.14.23:22-20.161.92.111:40836.service - OpenSSH per-connection server daemon (20.161.92.111:40836).
Mar 7 00:56:40.762499 sshd[6130]: Accepted publickey for core from 20.161.92.111 port 40836 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:40.764714 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:40.769458 systemd-logind[1459]: New session 15 of user core.
Mar 7 00:56:40.776123 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 00:56:41.404256 sshd[6130]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:41.408812 systemd[1]: sshd@28-88.99.14.23:22-20.161.92.111:40836.service: Deactivated successfully.
Mar 7 00:56:41.411104 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 00:56:41.413616 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit.
Mar 7 00:56:41.414635 systemd-logind[1459]: Removed session 15.
Mar 7 00:56:41.515469 systemd[1]: Started sshd@29-88.99.14.23:22-20.161.92.111:40840.service - OpenSSH per-connection server daemon (20.161.92.111:40840).
Mar 7 00:56:42.110910 sshd[6141]: Accepted publickey for core from 20.161.92.111 port 40840 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:42.112462 sshd[6141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:42.117932 systemd-logind[1459]: New session 16 of user core.
Mar 7 00:56:42.123131 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 00:56:43.204858 sshd[6141]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:43.212320 systemd[1]: sshd@29-88.99.14.23:22-20.161.92.111:40840.service: Deactivated successfully.
Mar 7 00:56:43.215273 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 00:56:43.218091 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit.
Mar 7 00:56:43.219712 systemd-logind[1459]: Removed session 16.
Mar 7 00:56:43.313353 systemd[1]: Started sshd@30-88.99.14.23:22-20.161.92.111:40852.service - OpenSSH per-connection server daemon (20.161.92.111:40852).
Mar 7 00:56:43.903958 sshd[6165]: Accepted publickey for core from 20.161.92.111 port 40852 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:43.904934 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:43.914103 systemd-logind[1459]: New session 17 of user core.
Mar 7 00:56:43.917074 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 00:56:44.529223 sshd[6165]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:44.533685 systemd[1]: sshd@30-88.99.14.23:22-20.161.92.111:40852.service: Deactivated successfully.
Mar 7 00:56:44.537493 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 00:56:44.538605 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit.
Mar 7 00:56:44.540470 systemd-logind[1459]: Removed session 17.
Mar 7 00:56:44.637156 systemd[1]: Started sshd@31-88.99.14.23:22-20.161.92.111:40866.service - OpenSSH per-connection server daemon (20.161.92.111:40866).
Mar 7 00:56:45.221929 sshd[6178]: Accepted publickey for core from 20.161.92.111 port 40866 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:45.223440 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:45.228952 systemd-logind[1459]: New session 18 of user core.
Mar 7 00:56:45.235220 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 00:56:45.714666 sshd[6178]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:45.719467 systemd[1]: sshd@31-88.99.14.23:22-20.161.92.111:40866.service: Deactivated successfully.
Mar 7 00:56:45.721737 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:56:45.723376 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:56:45.725366 systemd-logind[1459]: Removed session 18.
Mar 7 00:56:50.825195 systemd[1]: Started sshd@32-88.99.14.23:22-20.161.92.111:38878.service - OpenSSH per-connection server daemon (20.161.92.111:38878).
Mar 7 00:56:51.420661 sshd[6195]: Accepted publickey for core from 20.161.92.111 port 38878 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:51.422811 sshd[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:51.428346 systemd-logind[1459]: New session 19 of user core.
Mar 7 00:56:51.434106 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:56:51.926024 sshd[6195]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:51.934054 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:56:51.935047 systemd[1]: sshd@32-88.99.14.23:22-20.161.92.111:38878.service: Deactivated successfully.
Mar 7 00:56:51.937861 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:56:51.939847 systemd-logind[1459]: Removed session 19.
Mar 7 00:56:57.035181 systemd[1]: Started sshd@33-88.99.14.23:22-20.161.92.111:38894.service - OpenSSH per-connection server daemon (20.161.92.111:38894).
Mar 7 00:56:57.621280 sshd[6230]: Accepted publickey for core from 20.161.92.111 port 38894 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:56:57.623654 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:57.629774 systemd-logind[1459]: New session 20 of user core.
Mar 7 00:56:57.637213 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:56:58.117748 sshd[6230]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:58.124042 systemd[1]: sshd@33-88.99.14.23:22-20.161.92.111:38894.service: Deactivated successfully.
Mar 7 00:56:58.126267 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:56:58.127431 systemd-logind[1459]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:56:58.129362 systemd-logind[1459]: Removed session 20.
Mar 7 00:57:12.444320 systemd[1]: cri-containerd-d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6.scope: Deactivated successfully.
Mar 7 00:57:12.446226 systemd[1]: cri-containerd-d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6.scope: Consumed 19.343s CPU time.
Mar 7 00:57:12.482443 containerd[1479]: time="2026-03-07T00:57:12.482213943Z" level=info msg="shim disconnected" id=d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6 namespace=k8s.io
Mar 7 00:57:12.482443 containerd[1479]: time="2026-03-07T00:57:12.482308345Z" level=warning msg="cleaning up after shim disconnected" id=d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6 namespace=k8s.io
Mar 7 00:57:12.482443 containerd[1479]: time="2026-03-07T00:57:12.482320545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:12.484554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6-rootfs.mount: Deactivated successfully.
Mar 7 00:57:12.709759 kubelet[2662]: E0307 00:57:12.709605 2662 controller.go:251] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39510->10.0.0.2:2379: read: connection timed out"
Mar 7 00:57:12.885055 kubelet[2662]: I0307 00:57:12.885018 2662 scope.go:122] "RemoveContainer" containerID="d8107e6cb8dbb97ee9a4bd9dadee04415f9581bb137be60eef3075ee543d23e6"
Mar 7 00:57:12.888378 containerd[1479]: time="2026-03-07T00:57:12.888339757Z" level=info msg="CreateContainer within sandbox \"40759e38bbfd3cfa2c805950f752b8e06a13eeca925ef9d6a5278b61993c6c7f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 7 00:57:12.903460 containerd[1479]: time="2026-03-07T00:57:12.903327721Z" level=info msg="CreateContainer within sandbox \"40759e38bbfd3cfa2c805950f752b8e06a13eeca925ef9d6a5278b61993c6c7f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e1a2268c5cc5c75cd5d8c602d79ea2bfc93c970a0c91d01c24e8c5fe827829da\""
Mar 7 00:57:12.904772 containerd[1479]: time="2026-03-07T00:57:12.903891732Z" level=info msg="StartContainer for \"e1a2268c5cc5c75cd5d8c602d79ea2bfc93c970a0c91d01c24e8c5fe827829da\""
Mar 7 00:57:12.929756 systemd[1]: cri-containerd-1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e.scope: Deactivated successfully.
Mar 7 00:57:12.930274 systemd[1]: cri-containerd-1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e.scope: Consumed 3.391s CPU time, 15.9M memory peak, 0B memory swap peak.
Mar 7 00:57:12.950054 systemd[1]: Started cri-containerd-e1a2268c5cc5c75cd5d8c602d79ea2bfc93c970a0c91d01c24e8c5fe827829da.scope - libcontainer container e1a2268c5cc5c75cd5d8c602d79ea2bfc93c970a0c91d01c24e8c5fe827829da.
Mar 7 00:57:12.976927 containerd[1479]: time="2026-03-07T00:57:12.975219763Z" level=info msg="shim disconnected" id=1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e namespace=k8s.io
Mar 7 00:57:12.976927 containerd[1479]: time="2026-03-07T00:57:12.975490809Z" level=warning msg="cleaning up after shim disconnected" id=1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e namespace=k8s.io
Mar 7 00:57:12.976927 containerd[1479]: time="2026-03-07T00:57:12.975502649Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:12.976141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e-rootfs.mount: Deactivated successfully.
Mar 7 00:57:12.988252 containerd[1479]: time="2026-03-07T00:57:12.985865925Z" level=info msg="StartContainer for \"e1a2268c5cc5c75cd5d8c602d79ea2bfc93c970a0c91d01c24e8c5fe827829da\" returns successfully"
Mar 7 00:57:13.894697 kubelet[2662]: I0307 00:57:13.894660 2662 scope.go:122] "RemoveContainer" containerID="1843c2a9624d0905dfa85ca8dc074c5b03e6f48717166796c43f6058c337949e"
Mar 7 00:57:13.897150 containerd[1479]: time="2026-03-07T00:57:13.897100025Z" level=info msg="CreateContainer within sandbox \"5404c9a14ed990719a5d29efc3bb7023ce0c2da5f101421b6ffcef32cb8a341f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 00:57:13.918052 containerd[1479]: time="2026-03-07T00:57:13.917966855Z" level=info msg="CreateContainer within sandbox \"5404c9a14ed990719a5d29efc3bb7023ce0c2da5f101421b6ffcef32cb8a341f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f1453ab1a5f2a180cd548da17578d7da8f3e0ac35c9f687f58b4415a055551d2\""
Mar 7 00:57:13.918825 containerd[1479]: time="2026-03-07T00:57:13.918731269Z" level=info msg="StartContainer for \"f1453ab1a5f2a180cd548da17578d7da8f3e0ac35c9f687f58b4415a055551d2\""
Mar 7 00:57:13.957241 systemd[1]: Started cri-containerd-f1453ab1a5f2a180cd548da17578d7da8f3e0ac35c9f687f58b4415a055551d2.scope - libcontainer container f1453ab1a5f2a180cd548da17578d7da8f3e0ac35c9f687f58b4415a055551d2.
Mar 7 00:57:13.998014 containerd[1479]: time="2026-03-07T00:57:13.997963389Z" level=info msg="StartContainer for \"f1453ab1a5f2a180cd548da17578d7da8f3e0ac35c9f687f58b4415a055551d2\" returns successfully"
Mar 7 00:57:14.885651 kubelet[2662]: E0307 00:57:14.884475 2662 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:39210->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-d5610c1cbf.189a692823577e1e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-d5610c1cbf,UID:865b2e616385cd559867fa88bb7e6fbf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-d5610c1cbf,},FirstTimestamp:2026-03-07 00:57:04.436256286 +0000 UTC m=+204.706997220,LastTimestamp:2026-03-07 00:57:04.436256286 +0000 UTC m=+204.706997220,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-d5610c1cbf,}"
Mar 7 00:57:19.188179 systemd[1]: cri-containerd-9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d.scope: Deactivated successfully.
Mar 7 00:57:19.189149 systemd[1]: cri-containerd-9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d.scope: Consumed 2.774s CPU time, 16.4M memory peak, 0B memory swap peak.
Mar 7 00:57:19.212910 containerd[1479]: time="2026-03-07T00:57:19.210960542Z" level=info msg="shim disconnected" id=9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d namespace=k8s.io
Mar 7 00:57:19.212910 containerd[1479]: time="2026-03-07T00:57:19.211029264Z" level=warning msg="cleaning up after shim disconnected" id=9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d namespace=k8s.io
Mar 7 00:57:19.212910 containerd[1479]: time="2026-03-07T00:57:19.211037784Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:19.214928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f49c33a7363660e3d30af6f2b5e05df4d038b12863c19c896316746d1ebf42d-rootfs.mount: Deactivated successfully.