Mar 7 00:52:51.864618 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 7 00:52:51.864643 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026
Mar 7 00:52:51.864655 kernel: KASLR enabled
Mar 7 00:52:51.864661 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Mar 7 00:52:51.864667 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Mar 7 00:52:51.864712 kernel: random: crng init done
Mar 7 00:52:51.864721 kernel: ACPI: Early table checksum verification disabled
Mar 7 00:52:51.864727 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Mar 7 00:52:51.864734 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Mar 7 00:52:51.864743 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864750 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864757 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864763 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864770 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864778 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864786 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864793 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864800 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 7 00:52:51.864807 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 7 00:52:51.864814 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Mar 7 00:52:51.864820 kernel: NUMA: Failed to initialise from firmware
Mar 7 00:52:51.864827 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Mar 7 00:52:51.864834 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Mar 7 00:52:51.864841 kernel: Zone ranges:
Mar 7 00:52:51.864848 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 7 00:52:51.864856 kernel: DMA32 empty
Mar 7 00:52:51.864863 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Mar 7 00:52:51.864870 kernel: Movable zone start for each node
Mar 7 00:52:51.864876 kernel: Early memory node ranges
Mar 7 00:52:51.864883 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Mar 7 00:52:51.864890 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Mar 7 00:52:51.864897 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Mar 7 00:52:51.864904 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Mar 7 00:52:51.864911 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Mar 7 00:52:51.864918 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Mar 7 00:52:51.864924 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Mar 7 00:52:51.864931 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Mar 7 00:52:51.864940 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Mar 7 00:52:51.864946 kernel: psci: probing for conduit method from ACPI.
Mar 7 00:52:51.866598 kernel: psci: PSCIv1.1 detected in firmware.
Mar 7 00:52:51.866618 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 00:52:51.866625 kernel: psci: Trusted OS migration not required
Mar 7 00:52:51.866632 kernel: psci: SMC Calling Convention v1.1
Mar 7 00:52:51.866646 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 7 00:52:51.866654 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 7 00:52:51.866661 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 7 00:52:51.866668 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 00:52:51.866675 kernel: Detected PIPT I-cache on CPU0
Mar 7 00:52:51.866682 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 00:52:51.866689 kernel: CPU features: detected: Hardware dirty bit management
Mar 7 00:52:51.866696 kernel: CPU features: detected: Spectre-v4
Mar 7 00:52:51.866703 kernel: CPU features: detected: Spectre-BHB
Mar 7 00:52:51.866710 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 7 00:52:51.866719 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 7 00:52:51.866725 kernel: CPU features: detected: ARM erratum 1418040
Mar 7 00:52:51.866732 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 7 00:52:51.866739 kernel: alternatives: applying boot alternatives
Mar 7 00:52:51.866747 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:52:51.866754 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 00:52:51.866761 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 00:52:51.866768 kernel: Fallback order for Node 0: 0
Mar 7 00:52:51.866775 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Mar 7 00:52:51.866782 kernel: Policy zone: Normal
Mar 7 00:52:51.866789 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 00:52:51.866797 kernel: software IO TLB: area num 2.
Mar 7 00:52:51.866804 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Mar 7 00:52:51.866811 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved)
Mar 7 00:52:51.866818 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 00:52:51.866825 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 00:52:51.866833 kernel: rcu: RCU event tracing is enabled.
Mar 7 00:52:51.866840 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 00:52:51.866847 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 00:52:51.866854 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 00:52:51.866861 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 00:52:51.866868 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 00:52:51.866874 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 00:52:51.866883 kernel: GICv3: 256 SPIs implemented
Mar 7 00:52:51.866890 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 00:52:51.866896 kernel: Root IRQ handler: gic_handle_irq
Mar 7 00:52:51.866903 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 7 00:52:51.866910 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 7 00:52:51.866917 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 7 00:52:51.866924 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 7 00:52:51.866931 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Mar 7 00:52:51.866938 kernel: GICv3: using LPI property table @0x00000001000e0000
Mar 7 00:52:51.866945 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Mar 7 00:52:51.866952 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 00:52:51.866960 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 00:52:51.866967 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 7 00:52:51.866974 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 7 00:52:51.866981 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 7 00:52:51.866988 kernel: Console: colour dummy device 80x25
Mar 7 00:52:51.866995 kernel: ACPI: Core revision 20230628
Mar 7 00:52:51.867002 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 7 00:52:51.867009 kernel: pid_max: default: 32768 minimum: 301
Mar 7 00:52:51.867016 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 00:52:51.867023 kernel: landlock: Up and running.
Mar 7 00:52:51.867032 kernel: SELinux: Initializing.
Mar 7 00:52:51.867039 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:52:51.867046 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:52:51.867053 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:52:51.867060 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:52:51.867067 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 00:52:51.867074 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 00:52:51.867081 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 7 00:52:51.867088 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 7 00:52:51.867097 kernel: Remapping and enabling EFI services.
Mar 7 00:52:51.867104 kernel: smp: Bringing up secondary CPUs ...
Mar 7 00:52:51.867111 kernel: Detected PIPT I-cache on CPU1
Mar 7 00:52:51.867118 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 7 00:52:51.867125 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Mar 7 00:52:51.867132 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 7 00:52:51.867139 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 7 00:52:51.867146 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 00:52:51.867153 kernel: SMP: Total of 2 processors activated.
Mar 7 00:52:51.867160 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 00:52:51.867168 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 7 00:52:51.867176 kernel: CPU features: detected: Common not Private translations
Mar 7 00:52:51.867188 kernel: CPU features: detected: CRC32 instructions
Mar 7 00:52:51.867196 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 7 00:52:51.867204 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 7 00:52:51.867211 kernel: CPU features: detected: LSE atomic instructions
Mar 7 00:52:51.867218 kernel: CPU features: detected: Privileged Access Never
Mar 7 00:52:51.867226 kernel: CPU features: detected: RAS Extension Support
Mar 7 00:52:51.867234 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 7 00:52:51.867242 kernel: CPU: All CPU(s) started at EL1
Mar 7 00:52:51.867249 kernel: alternatives: applying system-wide alternatives
Mar 7 00:52:51.867257 kernel: devtmpfs: initialized
Mar 7 00:52:51.867264 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 00:52:51.867272 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 00:52:51.867279 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 00:52:51.867286 kernel: SMBIOS 3.0.0 present.
Mar 7 00:52:51.867295 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Mar 7 00:52:51.867303 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 00:52:51.867310 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 00:52:51.867318 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 00:52:51.867326 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 00:52:51.867333 kernel: audit: initializing netlink subsys (disabled)
Mar 7 00:52:51.867341 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1
Mar 7 00:52:51.867356 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 00:52:51.867364 kernel: cpuidle: using governor menu
Mar 7 00:52:51.867374 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 00:52:51.867381 kernel: ASID allocator initialised with 32768 entries
Mar 7 00:52:51.867389 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 00:52:51.867396 kernel: Serial: AMBA PL011 UART driver
Mar 7 00:52:51.867403 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 7 00:52:51.867411 kernel: Modules: 0 pages in range for non-PLT usage
Mar 7 00:52:51.867418 kernel: Modules: 509008 pages in range for PLT usage
Mar 7 00:52:51.867425 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 00:52:51.867433 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 00:52:51.867442 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 00:52:51.867449 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 00:52:51.867457 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 00:52:51.867464 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 00:52:51.867471 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 00:52:51.867479 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 00:52:51.867486 kernel: ACPI: Added _OSI(Module Device)
Mar 7 00:52:51.867494 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 00:52:51.867501 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 00:52:51.867510 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 00:52:51.867517 kernel: ACPI: Interpreter enabled
Mar 7 00:52:51.867525 kernel: ACPI: Using GIC for interrupt routing
Mar 7 00:52:51.867541 kernel: ACPI: MCFG table detected, 1 entries
Mar 7 00:52:51.867549 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 7 00:52:51.867556 kernel: printk: console [ttyAMA0] enabled
Mar 7 00:52:51.867564 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 7 00:52:51.867712 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 00:52:51.867789 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 7 00:52:51.867857 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 7 00:52:51.867922 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 7 00:52:51.867985 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 7 00:52:51.867995 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 7 00:52:51.868002 kernel: PCI host bridge to bus 0000:00
Mar 7 00:52:51.868074 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 7 00:52:51.868135 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 7 00:52:51.868201 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 7 00:52:51.868260 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 7 00:52:51.868359 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 7 00:52:51.868449 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Mar 7 00:52:51.868520 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Mar 7 00:52:51.869560 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 7 00:52:51.869711 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.869786 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Mar 7 00:52:51.869872 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.869941 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Mar 7 00:52:51.870014 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.870081 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Mar 7 00:52:51.870158 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.870226 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Mar 7 00:52:51.870302 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.870387 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Mar 7 00:52:51.870464 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.871604 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Mar 7 00:52:51.871743 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.871813 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Mar 7 00:52:51.871886 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.871952 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Mar 7 00:52:51.872025 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Mar 7 00:52:51.872092 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Mar 7 00:52:51.872178 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Mar 7 00:52:51.872246 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Mar 7 00:52:51.872325 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Mar 7 00:52:51.872421 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Mar 7 00:52:51.872493 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 7 00:52:51.873653 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 7 00:52:51.873752 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 7 00:52:51.873830 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Mar 7 00:52:51.873908 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Mar 7 00:52:51.873977 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Mar 7 00:52:51.874047 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Mar 7 00:52:51.874124 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Mar 7 00:52:51.874193 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Mar 7 00:52:51.874270 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 7 00:52:51.874385 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Mar 7 00:52:51.874473 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Mar 7 00:52:51.874589 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Mar 7 00:52:51.874666 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Mar 7 00:52:51.874736 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 7 00:52:51.874825 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Mar 7 00:52:51.874902 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Mar 7 00:52:51.874969 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Mar 7 00:52:51.875037 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Mar 7 00:52:51.875108 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Mar 7 00:52:51.875175 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Mar 7 00:52:51.875241 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Mar 7 00:52:51.875314 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Mar 7 00:52:51.875419 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Mar 7 00:52:51.875490 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Mar 7 00:52:51.876667 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 7 00:52:51.876764 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Mar 7 00:52:51.876833 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Mar 7 00:52:51.876905 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 7 00:52:51.876972 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Mar 7 00:52:51.877044 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Mar 7 00:52:51.877113 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 7 00:52:51.877419 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Mar 7 00:52:51.877503 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Mar 7 00:52:51.877592 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 7 00:52:51.877662 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Mar 7 00:52:51.877727 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Mar 7 00:52:51.877802 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 7 00:52:51.877870 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Mar 7 00:52:51.877934 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Mar 7 00:52:51.878003 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 7 00:52:51.878067 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Mar 7 00:52:51.878132 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Mar 7 00:52:51.878201 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 7 00:52:51.878266 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Mar 7 00:52:51.878336 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Mar 7 00:52:51.878433 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Mar 7 00:52:51.878515 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 7 00:52:51.878978 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Mar 7 00:52:51.879053 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 7 00:52:51.879121 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Mar 7 00:52:51.879187 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 7 00:52:51.879261 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Mar 7 00:52:51.879328 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 7 00:52:51.879449 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Mar 7 00:52:51.879521 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 7 00:52:51.879613 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Mar 7 00:52:51.879680 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 7 00:52:51.879752 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Mar 7 00:52:51.879817 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 7 00:52:51.879884 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Mar 7 00:52:51.879951 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 7 00:52:51.880018 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Mar 7 00:52:51.880084 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 7 00:52:51.880155 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Mar 7 00:52:51.880224 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Mar 7 00:52:51.880291 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Mar 7 00:52:51.880372 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 7 00:52:51.880444 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Mar 7 00:52:51.880512 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 7 00:52:51.880665 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Mar 7 00:52:51.880739 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 7 00:52:51.880807 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Mar 7 00:52:51.880877 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 7 00:52:51.880944 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Mar 7 00:52:51.881009 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 7 00:52:51.881075 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Mar 7 00:52:51.881141 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 7 00:52:51.881207 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Mar 7 00:52:51.881273 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 7 00:52:51.881339 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Mar 7 00:52:51.881434 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 7 00:52:51.881503 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Mar 7 00:52:51.881601 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Mar 7 00:52:51.881676 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Mar 7 00:52:51.881772 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Mar 7 00:52:51.881843 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 7 00:52:51.881914 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Mar 7 00:52:51.881980 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Mar 7 00:52:51.882050 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 7 00:52:51.882115 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Mar 7 00:52:51.882181 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 7 00:52:51.882254 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Mar 7 00:52:51.882324 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Mar 7 00:52:51.882432 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 7 00:52:51.882502 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Mar 7 00:52:51.882640 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 7 00:52:51.882717 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Mar 7 00:52:51.882785 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Mar 7 00:52:51.882849 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Mar 7 00:52:51.882914 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 7 00:52:51.882984 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Mar 7 00:52:51.883048 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 7 00:52:51.883120 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Mar 7 00:52:51.883186 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Mar 7 00:52:51.883250 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 7 00:52:51.883314 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Mar 7 00:52:51.883395 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 7 00:52:51.883471 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Mar 7 00:52:51.885625 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Mar 7 00:52:51.885729 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Mar 7 00:52:51.885798 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 7 00:52:51.885864 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Mar 7 00:52:51.885929 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 7 00:52:51.886003 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Mar 7 00:52:51.886070 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Mar 7 00:52:51.886136 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Mar 7 00:52:51.886209 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 7 00:52:51.886275 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Mar 7 00:52:51.886340 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 7 00:52:51.886435 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Mar 7 00:52:51.886505 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Mar 7 00:52:51.887668 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Mar 7 00:52:51.887753 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Mar 7 00:52:51.887822 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 7 00:52:51.887899 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Mar 7 00:52:51.887965 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 7 00:52:51.888032 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Mar 7 00:52:51.888100 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 7 00:52:51.888166 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Mar 7 00:52:51.888232 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 7 00:52:51.888300 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Mar 7 00:52:51.888392 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Mar 7 00:52:51.888469 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Mar 7 00:52:51.889580 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 7 00:52:51.889676 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 7 00:52:51.889764 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 7 00:52:51.889825 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 7 00:52:51.889906 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 7 00:52:51.889969 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Mar 7 00:52:51.890035 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Mar 7 00:52:51.890106 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Mar 7 00:52:51.890168 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Mar 7 00:52:51.890231 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Mar 7 00:52:51.890300 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Mar 7 00:52:51.890378 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Mar 7 00:52:51.891321 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Mar 7 00:52:51.891454 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Mar 7 00:52:51.891521 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Mar 7 00:52:51.892696 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Mar 7 00:52:51.892775 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Mar 7 00:52:51.892838 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Mar 7 00:52:51.892902 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Mar 7 00:52:51.892974 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Mar 7 00:52:51.893034 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Mar 7 00:52:51.893098 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Mar 7 00:52:51.893168 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Mar 7 00:52:51.893231 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Mar 7 00:52:51.893292 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Mar 7 00:52:51.893384 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Mar 7 00:52:51.893451 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Mar 7 00:52:51.893512 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Mar 7 00:52:51.893684 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Mar 7 00:52:51.893751 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Mar 7 00:52:51.893817 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Mar 7 00:52:51.893828 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 7 00:52:51.893836 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 7 00:52:51.893844 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 7 00:52:51.893852 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 7 00:52:51.893860 kernel: iommu: Default domain type: Translated
Mar 7 00:52:51.893868 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 00:52:51.893876 kernel: efivars: Registered efivars operations
Mar 7 00:52:51.893884 kernel: vgaarb: loaded
Mar 7 00:52:51.893893 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 00:52:51.893901 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 00:52:51.893909 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 00:52:51.893919 kernel: pnp: PnP ACPI init
Mar 7 00:52:51.893998 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 7 00:52:51.894010 kernel: pnp: PnP ACPI: found 1 devices
Mar 7 00:52:51.894018 kernel: NET: Registered PF_INET protocol family
Mar 7 00:52:51.894026 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 00:52:51.894036 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 00:52:51.894044 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 00:52:51.894053 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 00:52:51.894060 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 00:52:51.894068 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 00:52:51.894076 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:52:51.894084 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:52:51.894092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 00:52:51.894168 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Mar 7 00:52:51.894181 kernel: PCI: CLS 0 bytes, default 64
Mar 7 00:52:51.894189 kernel: kvm [1]: HYP mode not available
Mar 7 00:52:51.894197 kernel: Initialise system trusted keyrings
Mar 7 00:52:51.894205 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 00:52:51.894213 kernel: Key type asymmetric registered
Mar 7 00:52:51.894221 kernel: Asymmetric key parser 'x509' registered
Mar 7 00:52:51.894229 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 00:52:51.894237 kernel: io scheduler mq-deadline registered
Mar 7 00:52:51.894245 kernel: io scheduler kyber registered
Mar 7 00:52:51.894254 kernel: io scheduler bfq registered
Mar 7 00:52:51.894263 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 7 00:52:51.894333 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Mar 7 00:52:51.894451 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Mar 7 00:52:51.894521 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 7 00:52:51.894606 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Mar 7 00:52:51.894674 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Mar 7 00:52:51.894746 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 7 00:52:51.894814 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Mar 7 00:52:51.894880 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Mar 7 00:52:51.894947 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Mar 7 00:52:51.895015
kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 7 00:52:51.895082 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 7 00:52:51.895151 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:51.895219 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 7 00:52:51.895285 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 7 00:52:51.895364 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:51.895438 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 7 00:52:51.895505 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 7 00:52:51.897764 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:51.897851 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 7 00:52:51.897919 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 7 00:52:51.897989 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:51.898067 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 7 00:52:51.898140 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 7 00:52:51.898219 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:51.898232 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 7 00:52:51.898308 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 7 00:52:51.898400 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 7 00:52:51.898470 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Mar 7 00:52:51.898481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 7 00:52:51.898493 kernel: ACPI: button: Power Button [PWRB] Mar 7 00:52:51.898501 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 7 00:52:51.898605 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 7 00:52:51.898683 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 7 00:52:51.898695 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 00:52:51.898703 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 7 00:52:51.898772 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 7 00:52:51.898782 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 7 00:52:51.898790 kernel: thunder_xcv, ver 1.0 Mar 7 00:52:51.898801 kernel: thunder_bgx, ver 1.0 Mar 7 00:52:51.898810 kernel: nicpf, ver 1.0 Mar 7 00:52:51.898817 kernel: nicvf, ver 1.0 Mar 7 00:52:51.898896 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 7 00:52:51.898963 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:52:51 UTC (1772844771) Mar 7 00:52:51.898973 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 00:52:51.898982 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 7 00:52:51.898990 kernel: watchdog: Delayed init of the lockup detector failed: -19 Mar 7 00:52:51.899000 kernel: watchdog: Hard watchdog permanently disabled Mar 7 00:52:51.899008 kernel: NET: Registered PF_INET6 protocol family Mar 7 00:52:51.899016 kernel: Segment Routing with IPv6 Mar 7 00:52:51.899023 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 00:52:51.899031 kernel: NET: Registered PF_PACKET protocol family Mar 7 00:52:51.899039 kernel: Key type dns_resolver registered Mar 7 00:52:51.899047 kernel: registered taskstats version 1 Mar 7 00:52:51.899054 kernel: Loading compiled-in X.509 certificates Mar 7 00:52:51.899062 kernel: Loaded X.509 cert 
'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9' Mar 7 00:52:51.899072 kernel: Key type .fscrypt registered Mar 7 00:52:51.899079 kernel: Key type fscrypt-provisioning registered Mar 7 00:52:51.899087 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 00:52:51.899095 kernel: ima: Allocated hash algorithm: sha1 Mar 7 00:52:51.899103 kernel: ima: No architecture policies found Mar 7 00:52:51.899111 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 7 00:52:51.899118 kernel: clk: Disabling unused clocks Mar 7 00:52:51.899126 kernel: Freeing unused kernel memory: 39424K Mar 7 00:52:51.899134 kernel: Run /init as init process Mar 7 00:52:51.899142 kernel: with arguments: Mar 7 00:52:51.899151 kernel: /init Mar 7 00:52:51.899158 kernel: with environment: Mar 7 00:52:51.899166 kernel: HOME=/ Mar 7 00:52:51.899173 kernel: TERM=linux Mar 7 00:52:51.899183 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 00:52:51.899194 systemd[1]: Detected virtualization kvm. Mar 7 00:52:51.899202 systemd[1]: Detected architecture arm64. Mar 7 00:52:51.899211 systemd[1]: Running in initrd. Mar 7 00:52:51.899220 systemd[1]: No hostname configured, using default hostname. Mar 7 00:52:51.899227 systemd[1]: Hostname set to . Mar 7 00:52:51.899236 systemd[1]: Initializing machine ID from VM UUID. Mar 7 00:52:51.899244 systemd[1]: Queued start job for default target initrd.target. Mar 7 00:52:51.899252 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 7 00:52:51.899261 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:52:51.899270 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 00:52:51.899280 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:52:51.899288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 00:52:51.899297 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 00:52:51.899306 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 00:52:51.899315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 00:52:51.899323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:52:51.899332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:52:51.899342 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:52:51.899386 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:52:51.899395 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:52:51.899403 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:52:51.899411 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:52:51.899419 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:52:51.899428 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:52:51.899436 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 00:52:51.899445 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:52:51.899456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:52:51.899465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:52:51.899473 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:52:51.899481 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 00:52:51.899490 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:52:51.899500 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 00:52:51.899508 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 00:52:51.899517 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:52:51.899526 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:52:51.900512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:52:51.900522 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 00:52:51.900543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:52:51.900591 systemd-journald[236]: Collecting audit messages is disabled.
Mar 7 00:52:51.900618 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 00:52:51.900628 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:52:51.900637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:52:51.900647 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 00:52:51.900656 kernel: Bridge firewalling registered
Mar 7 00:52:51.900665 systemd-journald[236]: Journal started
Mar 7 00:52:51.900684 systemd-journald[236]: Runtime Journal (/run/log/journal/caf31918ebd742848f67c15f488ef519) is 8.0M, max 76.6M, 68.6M free.
Mar 7 00:52:51.880637 systemd-modules-load[237]: Inserted module 'overlay'
Mar 7 00:52:51.904594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:52:51.900097 systemd-modules-load[237]: Inserted module 'br_netfilter'
Mar 7 00:52:51.906929 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:52:51.907448 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:52:51.908826 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:52:51.916756 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:52:51.920619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:52:51.928682 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:52:51.930766 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:52:51.934902 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:52:51.947857 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 00:52:51.950143 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:52:51.952321 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:52:51.957223 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:52:51.969990 dracut-cmdline[271]: dracut-dracut-053
Mar 7 00:52:51.973968 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:52:52.000396 systemd-resolved[276]: Positive Trust Anchors:
Mar 7 00:52:52.001019 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:52:52.001053 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:52:52.011406 systemd-resolved[276]: Defaulting to hostname 'linux'.
Mar 7 00:52:52.012547 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:52:52.013185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:52:52.066565 kernel: SCSI subsystem initialized
Mar 7 00:52:52.070570 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 00:52:52.078613 kernel: iscsi: registered transport (tcp)
Mar 7 00:52:52.091597 kernel: iscsi: registered transport (qla4xxx)
Mar 7 00:52:52.091672 kernel: QLogic iSCSI HBA Driver
Mar 7 00:52:52.141881 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:52:52.150791 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 00:52:52.171103 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 00:52:52.171235 kernel: device-mapper: uevent: version 1.0.3
Mar 7 00:52:52.171279 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 00:52:52.220583 kernel: raid6: neonx8 gen() 15632 MB/s
Mar 7 00:52:52.237600 kernel: raid6: neonx4 gen() 13192 MB/s
Mar 7 00:52:52.254585 kernel: raid6: neonx2 gen() 13105 MB/s
Mar 7 00:52:52.271590 kernel: raid6: neonx1 gen() 10432 MB/s
Mar 7 00:52:52.288592 kernel: raid6: int64x8 gen() 6925 MB/s
Mar 7 00:52:52.305591 kernel: raid6: int64x4 gen() 7291 MB/s
Mar 7 00:52:52.322598 kernel: raid6: int64x2 gen() 6096 MB/s
Mar 7 00:52:52.339606 kernel: raid6: int64x1 gen() 5025 MB/s
Mar 7 00:52:52.339690 kernel: raid6: using algorithm neonx8 gen() 15632 MB/s
Mar 7 00:52:52.356644 kernel: raid6: .... xor() 11931 MB/s, rmw enabled
Mar 7 00:52:52.356731 kernel: raid6: using neon recovery algorithm
Mar 7 00:52:52.361754 kernel: xor: measuring software checksum speed
Mar 7 00:52:52.361809 kernel: 8regs : 19759 MB/sec
Mar 7 00:52:52.361831 kernel: 32regs : 18825 MB/sec
Mar 7 00:52:52.362574 kernel: arm64_neon : 27043 MB/sec
Mar 7 00:52:52.362607 kernel: xor: using function: arm64_neon (27043 MB/sec)
Mar 7 00:52:52.412607 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 00:52:52.427477 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:52:52.432751 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:52:52.460566 systemd-udevd[457]: Using default interface naming scheme 'v255'.
Mar 7 00:52:52.463897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:52:52.471750 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 00:52:52.488404 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation
Mar 7 00:52:52.528595 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:52:52.535731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:52:52.586471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:52:52.593721 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 00:52:52.614334 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 00:52:52.616749 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 00:52:52.617426 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:52:52.620003 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:52:52.629828 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 00:52:52.646903 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 00:52:52.691777 kernel: scsi host0: Virtio SCSI HBA
Mar 7 00:52:52.693762 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 7 00:52:52.693851 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Mar 7 00:52:52.709581 kernel: ACPI: bus type USB registered
Mar 7 00:52:52.709650 kernel: usbcore: registered new interface driver usbfs
Mar 7 00:52:52.709663 kernel: usbcore: registered new interface driver hub
Mar 7 00:52:52.711211 kernel: usbcore: registered new device driver usb
Mar 7 00:52:52.720120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 00:52:52.720240 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:52:52.721118 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:52:52.721715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:52:52.721845 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:52:52.722766 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:52:52.735779 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:52:52.752586 kernel: sr 0:0:0:0: Power-on or device reset occurred
Mar 7 00:52:52.753445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:52:52.757588 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 7 00:52:52.757812 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Mar 7 00:52:52.757971 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 7 00:52:52.757985 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Mar 7 00:52:52.760612 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Mar 7 00:52:52.760914 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:52:52.766546 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Mar 7 00:52:52.768881 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Mar 7 00:52:52.769050 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Mar 7 00:52:52.773574 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Mar 7 00:52:52.776601 kernel: hub 1-0:1.0: USB hub found
Mar 7 00:52:52.776842 kernel: hub 1-0:1.0: 4 ports detected
Mar 7 00:52:52.776987 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Mar 7 00:52:52.779074 kernel: hub 2-0:1.0: USB hub found
Mar 7 00:52:52.779259 kernel: hub 2-0:1.0: 4 ports detected
Mar 7 00:52:52.780810 kernel: sd 0:0:0:1: Power-on or device reset occurred
Mar 7 00:52:52.780986 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Mar 7 00:52:52.781727 kernel: sd 0:0:0:1: [sda] Write Protect is off
Mar 7 00:52:52.781851 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Mar 7 00:52:52.781935 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 7 00:52:52.786725 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 00:52:52.786761 kernel: GPT:17805311 != 80003071
Mar 7 00:52:52.786771 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 00:52:52.786787 kernel: GPT:17805311 != 80003071
Mar 7 00:52:52.786796 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 00:52:52.787558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 00:52:52.787593 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Mar 7 00:52:52.795266 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:52:52.833586 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (507)
Mar 7 00:52:52.839574 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (508)
Mar 7 00:52:52.849026 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Mar 7 00:52:52.853927 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Mar 7 00:52:52.858947 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 00:52:52.865640 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Mar 7 00:52:52.866305 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Mar 7 00:52:52.874703 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 00:52:52.897669 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 00:52:52.898194 disk-uuid[574]: Primary Header is updated.
Mar 7 00:52:52.898194 disk-uuid[574]: Secondary Entries is updated.
Mar 7 00:52:52.898194 disk-uuid[574]: Secondary Header is updated.
Mar 7 00:52:53.018155 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Mar 7 00:52:53.155194 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Mar 7 00:52:53.155248 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Mar 7 00:52:53.155927 kernel: usbcore: registered new interface driver usbhid
Mar 7 00:52:53.156542 kernel: usbhid: USB HID core driver
Mar 7 00:52:53.263570 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Mar 7 00:52:53.394551 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Mar 7 00:52:53.447596 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Mar 7 00:52:53.917120 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 7 00:52:53.917405 disk-uuid[575]: The operation has completed successfully.
Mar 7 00:52:53.977345 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 00:52:53.978208 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 00:52:53.993691 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 00:52:54.012291 sh[592]: Success
Mar 7 00:52:54.027777 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 7 00:52:54.092180 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 00:52:54.094569 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 00:52:54.100687 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 00:52:54.117771 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31
Mar 7 00:52:54.117851 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:52:54.117879 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 00:52:54.118862 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 00:52:54.118925 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 00:52:54.125584 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 00:52:54.128086 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 00:52:54.128787 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 00:52:54.134712 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 00:52:54.137712 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 00:52:54.154964 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:52:54.155019 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:52:54.155030 kernel: BTRFS info (device sda6): using free space tree
Mar 7 00:52:54.160557 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 00:52:54.160624 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 00:52:54.174061 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 00:52:54.175828 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:52:54.184351 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 00:52:54.194847 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 00:52:54.266078 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:52:54.274697 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:52:54.286116 ignition[695]: Ignition 2.19.0
Mar 7 00:52:54.286132 ignition[695]: Stage: fetch-offline
Mar 7 00:52:54.286186 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:52:54.286196 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:52:54.288813 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:52:54.286376 ignition[695]: parsed url from cmdline: ""
Mar 7 00:52:54.286379 ignition[695]: no config URL provided
Mar 7 00:52:54.286384 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:52:54.286395 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:52:54.286400 ignition[695]: failed to fetch config: resource requires networking
Mar 7 00:52:54.287168 ignition[695]: Ignition finished successfully
Mar 7 00:52:54.298623 systemd-networkd[778]: lo: Link UP
Mar 7 00:52:54.298631 systemd-networkd[778]: lo: Gained carrier
Mar 7 00:52:54.300166 systemd-networkd[778]: Enumeration completed
Mar 7 00:52:54.300280 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:52:54.300701 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:54.300704 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:52:54.301520 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:54.301522 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:52:54.302125 systemd-networkd[778]: eth0: Link UP
Mar 7 00:52:54.302128 systemd-networkd[778]: eth0: Gained carrier
Mar 7 00:52:54.302136 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:54.303065 systemd[1]: Reached target network.target - Network.
Mar 7 00:52:54.307175 systemd-networkd[778]: eth1: Link UP
Mar 7 00:52:54.307178 systemd-networkd[778]: eth1: Gained carrier
Mar 7 00:52:54.307185 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:54.309800 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 00:52:54.324197 ignition[781]: Ignition 2.19.0
Mar 7 00:52:54.324207 ignition[781]: Stage: fetch
Mar 7 00:52:54.324391 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:52:54.324400 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:52:54.324495 ignition[781]: parsed url from cmdline: ""
Mar 7 00:52:54.324498 ignition[781]: no config URL provided
Mar 7 00:52:54.324503 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:52:54.324510 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:52:54.324543 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Mar 7 00:52:54.325111 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Mar 7 00:52:54.343667 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 7 00:52:54.354628 systemd-networkd[778]: eth0: DHCPv4 address 188.245.55.131/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 7 00:52:54.525300 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Mar 7 00:52:54.531442 ignition[781]: GET result: OK
Mar 7 00:52:54.531523 ignition[781]: parsing config with SHA512: 9ba833a7a28b07a0ce7d71de2e584d7180974f9ee85f952e8c44afcf25fe1d3d617cd3b20db5f289c94e262fda8725529412801a41877c2b0f2c524646ab5174
Mar 7 00:52:54.538043 unknown[781]: fetched base config from "system"
Mar 7 00:52:54.538061 unknown[781]: fetched base config from "system"
Mar 7 00:52:54.538076 unknown[781]: fetched user config from "hetzner"
Mar 7 00:52:54.539418 ignition[781]: fetch: fetch complete
Mar 7 00:52:54.539425 ignition[781]: fetch: fetch passed
Mar 7 00:52:54.539490 ignition[781]: Ignition finished successfully
Mar 7 00:52:54.543575 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 00:52:54.549716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 00:52:54.563048 ignition[788]: Ignition 2.19.0
Mar 7 00:52:54.563058 ignition[788]: Stage: kargs
Mar 7 00:52:54.563240 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:52:54.563251 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:52:54.564383 ignition[788]: kargs: kargs passed
Mar 7 00:52:54.568463 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 00:52:54.564438 ignition[788]: Ignition finished successfully
Mar 7 00:52:54.575729 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 00:52:54.591110 ignition[795]: Ignition 2.19.0
Mar 7 00:52:54.591131 ignition[795]: Stage: disks
Mar 7 00:52:54.591460 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:52:54.591480 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:52:54.593512 ignition[795]: disks: disks passed
Mar 7 00:52:54.595392 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 00:52:54.593659 ignition[795]: Ignition finished successfully
Mar 7 00:52:54.596500 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 00:52:54.597122 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:52:54.597785 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:52:54.599066 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:52:54.602661 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:52:54.611770 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 00:52:54.628974 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 7 00:52:54.631926 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 00:52:54.637871 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 00:52:54.685563 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none.
Mar 7 00:52:54.687182 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 00:52:54.689777 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 00:52:54.706781 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:52:54.712331 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 00:52:54.714482 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Mar 7 00:52:54.716392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 00:52:54.716443 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:52:54.726208 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (812)
Mar 7 00:52:54.726238 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:52:54.727618 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:52:54.727653 kernel: BTRFS info (device sda6): using free space tree
Mar 7 00:52:54.727601 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 00:52:54.737789 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 00:52:54.742598 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 00:52:54.742637 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 00:52:54.747526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:52:54.780569 coreos-metadata[814]: Mar 07 00:52:54.780 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Mar 7 00:52:54.781828 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 00:52:54.783697 coreos-metadata[814]: Mar 07 00:52:54.783 INFO Fetch successful
Mar 7 00:52:54.783697 coreos-metadata[814]: Mar 07 00:52:54.783 INFO wrote hostname ci-4081-3-6-n-4bed64c074 to /sysroot/etc/hostname
Mar 7 00:52:54.788585 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 00:52:54.791981 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Mar 7 00:52:54.797622 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 00:52:54.803630 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 00:52:54.907194 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 00:52:54.912728 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 00:52:54.915728 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 00:52:54.925565 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:52:54.949732 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 00:52:54.952619 ignition[930]: INFO : Ignition 2.19.0
Mar 7 00:52:54.952619 ignition[930]: INFO : Stage: mount
Mar 7 00:52:54.952619 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:52:54.952619 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:52:54.955682 ignition[930]: INFO : mount: mount passed
Mar 7 00:52:54.955682 ignition[930]: INFO : Ignition finished successfully
Mar 7 00:52:54.957664 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 00:52:54.963760 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 00:52:55.119496 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 00:52:55.127884 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:52:55.138587 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Mar 7 00:52:55.140551 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:52:55.140609 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:52:55.140626 kernel: BTRFS info (device sda6): using free space tree
Mar 7 00:52:55.143561 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 7 00:52:55.143633 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 7 00:52:55.145943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:52:55.175629 ignition[959]: INFO : Ignition 2.19.0
Mar 7 00:52:55.175629 ignition[959]: INFO : Stage: files
Mar 7 00:52:55.175629 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:52:55.175629 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:52:55.180393 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 00:52:55.180393 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 00:52:55.180393 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 00:52:55.183395 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 00:52:55.183395 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 00:52:55.183395 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 00:52:55.181564 unknown[959]: wrote ssh authorized keys file for user: core
Mar 7 00:52:55.186725 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 00:52:55.186725 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 00:52:55.186725 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:52:55.186725 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 7 00:52:55.258922 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 00:52:55.330201 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:52:55.330201 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:52:55.334253 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 7 00:52:55.443016 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 00:52:55.459991 systemd-networkd[778]: eth1: Gained IPv6LL
Mar 7 00:52:55.710935 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:52:55.710935 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:52:55.713433 ignition[959]: INFO : files: files passed
Mar 7 00:52:55.713433 ignition[959]: INFO : Ignition finished successfully
Mar 7 00:52:55.714913 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 00:52:55.726158 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 00:52:55.728810 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 00:52:55.738897 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 00:52:55.739563 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 00:52:55.748438 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:52:55.748438 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:52:55.751243 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:52:55.755582 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:52:55.756473 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 00:52:55.765025 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 00:52:55.779746 systemd-networkd[778]: eth0: Gained IPv6LL
Mar 7 00:52:55.796725 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 00:52:55.797957 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 00:52:55.799162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 00:52:55.800785 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 00:52:55.802127 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 00:52:55.807832 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 00:52:55.832823 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:52:55.838760 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 00:52:55.852030 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:52:55.852823 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:52:55.853972 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 00:52:55.855024 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 00:52:55.855148 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:52:55.856615 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 00:52:55.857205 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 00:52:55.858250 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 00:52:55.859324 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:52:55.860331 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 00:52:55.861402 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 00:52:55.862490 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 00:52:55.863697 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 00:52:55.864684 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 00:52:55.865791 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 00:52:55.866673 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 00:52:55.866795 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 00:52:55.868061 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:52:55.868739 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:52:55.869853 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 00:52:55.869927 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:52:55.871073 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 00:52:55.871195 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 00:52:55.872714 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 00:52:55.872830 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:52:55.874030 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 00:52:55.874120 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 00:52:55.875191 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Mar 7 00:52:55.875279 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Mar 7 00:52:55.886945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 00:52:55.891746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 00:52:55.892415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 00:52:55.892582 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:52:55.894704 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 00:52:55.894970 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:52:55.906914 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 00:52:55.907820 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 00:52:55.913566 ignition[1011]: INFO : Ignition 2.19.0
Mar 7 00:52:55.913566 ignition[1011]: INFO : Stage: umount
Mar 7 00:52:55.913566 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:52:55.913566 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Mar 7 00:52:55.921943 ignition[1011]: INFO : umount: umount passed
Mar 7 00:52:55.921943 ignition[1011]: INFO : Ignition finished successfully
Mar 7 00:52:55.918431 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 00:52:55.918601 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 00:52:55.927314 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 00:52:55.930113 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 00:52:55.930170 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 00:52:55.932846 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 00:52:55.932924 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 00:52:55.934414 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 00:52:55.934466 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 00:52:55.937603 systemd[1]: Stopped target network.target - Network.
Mar 7 00:52:55.941610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 00:52:55.941691 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:52:55.944412 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 00:52:55.946451 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 00:52:55.948809 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:52:55.949606 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 00:52:55.950108 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 00:52:55.951992 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 00:52:55.952037 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:52:55.953147 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 00:52:55.953187 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:52:55.954174 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 00:52:55.954221 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 00:52:55.955250 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 00:52:55.955293 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 00:52:55.956391 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 00:52:55.957403 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 00:52:55.958247 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 00:52:55.958382 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 00:52:55.959832 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 00:52:55.959892 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 00:52:55.962601 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 00:52:55.962718 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 00:52:55.963626 systemd-networkd[778]: eth1: DHCPv6 lease lost
Mar 7 00:52:55.965513 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 00:52:55.965709 systemd-networkd[778]: eth0: DHCPv6 lease lost
Mar 7 00:52:55.965712 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:52:55.967748 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 00:52:55.968904 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 00:52:55.969727 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 00:52:55.969759 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:52:55.973701 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 00:52:55.975474 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 00:52:55.975609 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:52:55.977212 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 00:52:55.977279 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:52:55.978152 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 00:52:55.978196 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:52:55.979427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:52:55.996422 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 00:52:55.998630 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:52:56.000248 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 00:52:56.000375 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:52:56.001582 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 00:52:56.001615 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:52:56.003149 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 00:52:56.003207 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:52:56.005028 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 00:52:56.005078 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:52:56.006155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 00:52:56.006198 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:52:56.011796 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 00:52:56.012401 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 00:52:56.012466 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:52:56.013686 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 7 00:52:56.013734 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:52:56.014876 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 00:52:56.014917 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:52:56.016413 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:52:56.016461 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:52:56.017635 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 00:52:56.017744 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 00:52:56.024475 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 00:52:56.024610 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 00:52:56.027645 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 00:52:56.043948 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 00:52:56.059403 systemd[1]: Switching root.
Mar 7 00:52:56.095653 systemd-journald[236]: Journal stopped
Mar 7 00:52:56.960952 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Mar 7 00:52:56.961017 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 00:52:56.961035 kernel: SELinux: policy capability open_perms=1
Mar 7 00:52:56.961044 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 00:52:56.961057 kernel: SELinux: policy capability always_check_network=0
Mar 7 00:52:56.961068 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 00:52:56.961078 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 00:52:56.961088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 00:52:56.961097 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 00:52:56.961107 kernel: audit: type=1403 audit(1772844776.269:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 00:52:56.961118 systemd[1]: Successfully loaded SELinux policy in 34.610ms.
Mar 7 00:52:56.961141 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.008ms.
Mar 7 00:52:56.961153 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 00:52:56.961166 systemd[1]: Detected virtualization kvm.
Mar 7 00:52:56.961176 systemd[1]: Detected architecture arm64.
Mar 7 00:52:56.961187 systemd[1]: Detected first boot.
Mar 7 00:52:56.961198 systemd[1]: Hostname set to <ci-4081-3-6-n-4bed64c074>.
Mar 7 00:52:56.961211 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 00:52:56.961222 zram_generator::config[1077]: No configuration found.
Mar 7 00:52:56.961233 systemd[1]: Populated /etc with preset unit settings.
Mar 7 00:52:56.961245 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 00:52:56.961255 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 7 00:52:56.961266 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 00:52:56.961280 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 00:52:56.961327 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 00:52:56.961340 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 00:52:56.961353 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 00:52:56.961363 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 00:52:56.961375 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 00:52:56.961387 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 00:52:56.961398 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:52:56.961409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:52:56.961420 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 00:52:56.961430 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 00:52:56.961441 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 00:52:56.961451 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:52:56.961462 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 7 00:52:56.961472 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:52:56.961484 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 00:52:56.961495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:52:56.961506 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:52:56.961516 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:52:56.961526 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:52:56.961566 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 00:52:56.961580 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 00:52:56.961590 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:52:56.961601 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 00:52:56.961613 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:52:56.961624 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:52:56.961635 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:52:56.961645 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 00:52:56.961656 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 00:52:56.961666 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 00:52:56.961681 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 00:52:56.961695 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 00:52:56.961708 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 00:52:56.961718 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 00:52:56.961729 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 00:52:56.961740 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:52:56.961755 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:52:56.961768 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 00:52:56.961779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:52:56.961790 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:52:56.961801 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:52:56.961812 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 00:52:56.961822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:52:56.961837 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 00:52:56.961848 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 00:52:56.961863 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 00:52:56.961874 kernel: fuse: init (API version 7.39)
Mar 7 00:52:56.961885 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:52:56.961895 kernel: loop: module loaded
Mar 7 00:52:56.961905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:52:56.961915 kernel: ACPI: bus type drm_connector registered
Mar 7 00:52:56.961925 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 00:52:56.961936 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 00:52:56.961948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:52:56.961979 systemd-journald[1158]: Collecting audit messages is disabled.
Mar 7 00:52:56.962002 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 00:52:56.962014 systemd-journald[1158]: Journal started
Mar 7 00:52:56.962036 systemd-journald[1158]: Runtime Journal (/run/log/journal/caf31918ebd742848f67c15f488ef519) is 8.0M, max 76.6M, 68.6M free.
Mar 7 00:52:56.963874 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 00:52:56.965634 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:52:56.967574 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 00:52:56.968703 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 00:52:56.969385 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 00:52:56.970749 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 00:52:56.971559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:52:56.975898 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 00:52:56.976072 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 00:52:56.978016 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:52:56.978174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:52:56.979611 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:52:56.979763 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:52:56.980923 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:52:56.981068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:52:56.982465 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 00:52:56.982750 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 00:52:56.983788 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:52:56.984226 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:52:56.988423 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:52:56.993260 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:52:56.994303 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 00:52:57.000890 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 00:52:57.007226 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 00:52:57.013705 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 00:52:57.020962 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 00:52:57.021670 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 00:52:57.025788 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 00:52:57.033677 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 00:52:57.034316 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:52:57.045812 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 00:52:57.046452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:52:57.051064 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:52:57.055918 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:52:57.064669 systemd-journald[1158]: Time spent on flushing to /var/log/journal/caf31918ebd742848f67c15f488ef519 is 46.126ms for 1110 entries.
Mar 7 00:52:57.064669 systemd-journald[1158]: System Journal (/var/log/journal/caf31918ebd742848f67c15f488ef519) is 8.0M, max 584.8M, 576.8M free.
Mar 7 00:52:57.126048 systemd-journald[1158]: Received client request to flush runtime journal.
Mar 7 00:52:57.067709 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:52:57.071816 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 00:52:57.072795 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 00:52:57.089829 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 00:52:57.091887 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 00:52:57.093336 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 00:52:57.122155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:52:57.124748 udevadm[1217]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 00:52:57.128414 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Mar 7 00:52:57.128432 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Mar 7 00:52:57.130694 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 00:52:57.135654 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:52:57.141886 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 00:52:57.176019 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 00:52:57.186797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:52:57.201059 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Mar 7 00:52:57.201081 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Mar 7 00:52:57.207080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:52:57.556457 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 00:52:57.563797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:52:57.586748 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Mar 7 00:52:57.606304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:52:57.618476 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:52:57.631707 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 00:52:57.683104 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Mar 7 00:52:57.707414 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 00:52:57.793571 kernel: mousedev: PS/2 mouse device common for all mice
Mar 7 00:52:57.816027 systemd-networkd[1246]: lo: Link UP
Mar 7 00:52:57.816036 systemd-networkd[1246]: lo: Gained carrier
Mar 7 00:52:57.818142 systemd-networkd[1246]: Enumeration completed
Mar 7 00:52:57.818654 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:52:57.821466 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:57.821541 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:52:57.823036 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:57.823116 systemd-networkd[1246]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:52:57.824058 systemd-networkd[1246]: eth0: Link UP
Mar 7 00:52:57.824126 systemd-networkd[1246]: eth0: Gained carrier
Mar 7 00:52:57.824182 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:57.825732 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 00:52:57.831643 systemd-networkd[1246]: eth1: Link UP
Mar 7 00:52:57.833483 systemd-networkd[1246]: eth1: Gained carrier
Mar 7 00:52:57.833503 systemd-networkd[1246]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:52:57.851556 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1257)
Mar 7 00:52:57.875231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:52:57.877594 systemd-networkd[1246]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Mar 7 00:52:57.881762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:52:57.884462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:52:57.888089 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:52:57.888769 systemd-networkd[1246]: eth0: DHCPv4 address 188.245.55.131/32, gateway 172.31.1.1 acquired from 172.31.1.1
Mar 7 00:52:57.889745 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 00:52:57.889797 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 00:52:57.939994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:52:57.940169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:52:57.943327 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:52:57.948973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:52:57.949166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:52:57.951205 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:52:57.951525 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:52:57.958451 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Mar 7 00:52:57.958623 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Mar 7 00:52:57.958648 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar 7 00:52:57.958660 kernel: [drm] features: -context_init
Mar 7 00:52:57.959710 kernel: [drm] number of scanouts: 1
Mar 7 00:52:57.959748 kernel: [drm] number of cap sets: 0
Mar 7 00:52:57.961032 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Mar 7 00:52:57.960557 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:52:57.966829 kernel: Console: switching to colour frame buffer device 160x50
Mar 7 00:52:57.967784 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:52:57.974589 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar 7 00:52:57.980248 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:52:57.980496 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:52:57.982736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:52:58.043107 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:52:58.101658 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 00:52:58.111802 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 00:52:58.124698 lvm[1308]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 00:52:58.160263 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 00:52:58.162858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:52:58.168810 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 00:52:58.174825 lvm[1311]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 00:52:58.203257 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 00:52:58.205144 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:52:58.205903 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 00:52:58.205928 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:52:58.206769 systemd[1]: Reached target machines.target - Containers.
Mar 7 00:52:58.208661 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 00:52:58.216805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 00:52:58.220715 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 00:52:58.222738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:52:58.229823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 00:52:58.235734 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 00:52:58.239965 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 00:52:58.241448 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 00:52:58.259105 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 00:52:58.270752 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 00:52:58.274303 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 00:52:58.274663 kernel: loop0: detected capacity change from 0 to 114432
Mar 7 00:52:58.292553 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 00:52:58.312770 kernel: loop1: detected capacity change from 0 to 209336
Mar 7 00:52:58.352576 kernel: loop2: detected capacity change from 0 to 114328
Mar 7 00:52:58.389606 kernel: loop3: detected capacity change from 0 to 8
Mar 7 00:52:58.409697 kernel: loop4: detected capacity change from 0 to 114432
Mar 7 00:52:58.424243 kernel: loop5: detected capacity change from 0 to 209336
Mar 7 00:52:58.439558 kernel: loop6: detected capacity change from 0 to 114328
Mar 7 00:52:58.452591 kernel: loop7: detected capacity change from 0 to 8
Mar 7 00:52:58.452741 (sd-merge)[1333]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Mar 7 00:52:58.453182 (sd-merge)[1333]: Merged extensions into '/usr'.
Mar 7 00:52:58.470050 systemd[1]: Reloading requested from client PID 1319 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 00:52:58.470065 systemd[1]: Reloading...
Mar 7 00:52:58.554559 zram_generator::config[1361]: No configuration found.
Mar 7 00:52:58.661988 ldconfig[1316]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 00:52:58.692993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:52:58.752687 systemd[1]: Reloading finished in 282 ms.
Mar 7 00:52:58.772796 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 00:52:58.773835 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 00:52:58.785887 systemd[1]: Starting ensure-sysext.service...
Mar 7 00:52:58.790799 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:52:58.795340 systemd[1]: Reloading requested from client PID 1405 ('systemctl') (unit ensure-sysext.service)...
Mar 7 00:52:58.795568 systemd[1]: Reloading...
Mar 7 00:52:58.813579 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 00:52:58.813861 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 00:52:58.814514 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 00:52:58.814760 systemd-tmpfiles[1406]: ACLs are not supported, ignoring.
Mar 7 00:52:58.814812 systemd-tmpfiles[1406]: ACLs are not supported, ignoring.
Mar 7 00:52:58.817631 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:52:58.817640 systemd-tmpfiles[1406]: Skipping /boot
Mar 7 00:52:58.825714 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:52:58.825725 systemd-tmpfiles[1406]: Skipping /boot
Mar 7 00:52:58.883561 zram_generator::config[1435]: No configuration found.
Mar 7 00:52:58.915942 systemd-networkd[1246]: eth0: Gained IPv6LL
Mar 7 00:52:59.008899 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:52:59.069966 systemd[1]: Reloading finished in 273 ms.
Mar 7 00:52:59.088571 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 00:52:59.089846 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:52:59.107796 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 00:52:59.111732 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 00:52:59.122771 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 00:52:59.127723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:52:59.131713 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 00:52:59.145964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:52:59.150461 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:52:59.155454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:52:59.169592 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:52:59.170611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:52:59.176216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:52:59.176782 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:52:59.181682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:52:59.187775 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:52:59.188480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:52:59.188984 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 00:52:59.195346 systemd[1]: Finished ensure-sysext.service.
Mar 7 00:52:59.201989 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 00:52:59.205938 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:52:59.206104 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:52:59.215194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:52:59.215393 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:52:59.219043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:52:59.219230 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:52:59.220697 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:52:59.222046 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:52:59.224326 augenrules[1513]: No rules
Mar 7 00:52:59.232475 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 00:52:59.237320 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:52:59.237407 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:52:59.244723 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 7 00:52:59.250246 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 00:52:59.259065 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 00:52:59.259895 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 00:52:59.275034 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 00:52:59.288583 systemd-resolved[1491]: Positive Trust Anchors:
Mar 7 00:52:59.288937 systemd-resolved[1491]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:52:59.289022 systemd-resolved[1491]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:52:59.293997 systemd-resolved[1491]: Using system hostname 'ci-4081-3-6-n-4bed64c074'.
Mar 7 00:52:59.296443 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:52:59.297248 systemd[1]: Reached target network.target - Network.
Mar 7 00:52:59.299608 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 00:52:59.299819 systemd-networkd[1246]: eth1: Gained IPv6LL
Mar 7 00:52:59.300383 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:52:59.319183 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 7 00:52:59.320128 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:52:59.321399 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 00:52:59.322412 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 00:52:59.323403 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 7 00:52:59.324325 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 7 00:52:59.324353 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:52:59.324892 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 00:52:59.325654 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 7 00:52:59.326323 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 7 00:52:59.327063 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:52:59.328158 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 7 00:52:59.330723 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 7 00:52:59.332824 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 7 00:52:59.337420 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 7 00:52:59.338194 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:52:59.339406 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:52:59.341027 systemd[1]: System is tainted: cgroupsv1
Mar 7 00:52:59.341095 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 7 00:52:59.341135 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 7 00:52:59.344669 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 7 00:52:59.348049 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 7 00:52:59.360144 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 7 00:52:59.366669 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 7 00:52:59.371891 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 7 00:52:59.375064 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 7 00:52:59.377836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:52:59.383879 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 7 00:52:59.392510 jq[1542]: false
Mar 7 00:52:59.393731 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 7 00:52:59.398348 coreos-metadata[1538]: Mar 07 00:52:59.398 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Mar 7 00:52:59.402070 coreos-metadata[1538]: Mar 07 00:52:59.401 INFO Fetch successful
Mar 7 00:52:59.402070 coreos-metadata[1538]: Mar 07 00:52:59.401 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Mar 7 00:52:59.404342 coreos-metadata[1538]: Mar 07 00:52:59.402 INFO Fetch successful
Mar 7 00:52:59.412705 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 7 00:52:59.419193 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Mar 7 00:52:59.427739 dbus-daemon[1539]: [system] SELinux support is enabled
Mar 7 00:52:59.428697 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 7 00:52:59.441178 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 7 00:52:59.451090 extend-filesystems[1543]: Found loop4
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found loop5
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found loop6
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found loop7
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda1
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda2
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda3
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found usr
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda4
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda6
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda7
Mar 7 00:52:59.454977 extend-filesystems[1543]: Found sda9
Mar 7 00:52:59.454977 extend-filesystems[1543]: Checking size of /dev/sda9
Mar 7 00:52:59.453003 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 7 00:52:59.455047 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 7 00:52:59.461833 systemd[1]: Starting update-engine.service - Update Engine...
Mar 7 00:52:59.497014 extend-filesystems[1543]: Resized partition /dev/sda9
Mar 7 00:52:59.463250 systemd-timesyncd[1528]: Contacted time server 185.252.140.126:123 (0.flatcar.pool.ntp.org).
Mar 7 00:52:59.463324 systemd-timesyncd[1528]: Initial clock synchronization to Sat 2026-03-07 00:52:59.660701 UTC.
Mar 7 00:52:59.480832 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 7 00:52:59.491204 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 7 00:52:59.498896 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 7 00:52:59.511931 jq[1573]: true
Mar 7 00:52:59.512098 extend-filesystems[1584]: resize2fs 1.47.1 (20-May-2024)
Mar 7 00:52:59.499141 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 7 00:52:59.531147 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Mar 7 00:52:59.504001 systemd[1]: motdgen.service: Deactivated successfully.
Mar 7 00:52:59.504235 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 7 00:52:59.510164 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 7 00:52:59.527004 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 7 00:52:59.527233 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 7 00:52:59.551836 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 7 00:52:59.568490 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 7 00:52:59.571750 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 7 00:52:59.574010 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 7 00:52:59.574037 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 7 00:52:59.581604 jq[1592]: true Mar 7 00:52:59.592037 update_engine[1569]: I20260307 00:52:59.581740 1569 main.cc:92] Flatcar Update Engine starting Mar 7 00:52:59.594813 tar[1589]: linux-arm64/LICENSE Mar 7 00:52:59.594813 tar[1589]: linux-arm64/helm Mar 7 00:52:59.620855 systemd[1]: Started update-engine.service - Update Engine. Mar 7 00:52:59.623365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:52:59.625434 update_engine[1569]: I20260307 00:52:59.624912 1569 update_check_scheduler.cc:74] Next update check in 4m16s Mar 7 00:52:59.627705 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 00:52:59.682558 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1247) Mar 7 00:52:59.690652 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:52:59.697560 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 00:52:59.710700 systemd[1]: Starting sshkeys.service... Mar 7 00:52:59.740956 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 00:52:59.743836 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:52:59.752211 systemd-logind[1562]: New seat seat0. Mar 7 00:52:59.760229 systemd-logind[1562]: Watching system buttons on /dev/input/event0 (Power Button) Mar 7 00:52:59.765112 systemd-logind[1562]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 7 00:52:59.766424 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:52:59.778294 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 00:52:59.792021 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 7 00:52:59.809653 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 7 00:52:59.822051 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:52:59.829981 extend-filesystems[1584]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 7 00:52:59.829981 extend-filesystems[1584]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 7 00:52:59.829981 extend-filesystems[1584]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 7 00:52:59.840692 extend-filesystems[1543]: Resized filesystem in /dev/sda9 Mar 7 00:52:59.840692 extend-filesystems[1543]: Found sr0 Mar 7 00:52:59.831398 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 00:52:59.832423 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:52:59.854354 coreos-metadata[1643]: Mar 07 00:52:59.854 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 7 00:52:59.858596 coreos-metadata[1643]: Mar 07 00:52:59.858 INFO Fetch successful Mar 7 00:52:59.864434 unknown[1643]: wrote ssh authorized keys file for user: core Mar 7 00:52:59.901250 update-ssh-keys[1655]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:52:59.906748 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 00:52:59.912273 systemd[1]: Finished sshkeys.service. Mar 7 00:53:00.088088 containerd[1593]: time="2026-03-07T00:53:00.087690240Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 00:53:00.162024 containerd[1593]: time="2026-03-07T00:53:00.158276418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163293 containerd[1593]: time="2026-03-07T00:53:00.163240806Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163293 containerd[1593]: time="2026-03-07T00:53:00.163286179Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 00:53:00.163373 containerd[1593]: time="2026-03-07T00:53:00.163306182Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 00:53:00.163506 containerd[1593]: time="2026-03-07T00:53:00.163482143Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 00:53:00.163545 containerd[1593]: time="2026-03-07T00:53:00.163505752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163624 containerd[1593]: time="2026-03-07T00:53:00.163603714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163624 containerd[1593]: time="2026-03-07T00:53:00.163621175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163858 containerd[1593]: time="2026-03-07T00:53:00.163835255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163858 containerd[1593]: time="2026-03-07T00:53:00.163856528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163906 containerd[1593]: time="2026-03-07T00:53:00.163870382Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163906 containerd[1593]: time="2026-03-07T00:53:00.163880752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 00:53:00.163988 containerd[1593]: time="2026-03-07T00:53:00.163966376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:53:00.164191 containerd[1593]: time="2026-03-07T00:53:00.164169512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:53:00.164370 containerd[1593]: time="2026-03-07T00:53:00.164346417Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:53:00.164370 containerd[1593]: time="2026-03-07T00:53:00.164368345Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 00:53:00.164472 containerd[1593]: time="2026-03-07T00:53:00.164453273Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 00:53:00.164528 containerd[1593]: time="2026-03-07T00:53:00.164512090Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.175692476Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.175799168Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.175818802Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.175838968Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.175854297Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176040383Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176384764Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176485964Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176617822Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176639013Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176653523Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176668975Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176682952Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 00:53:00.177594 containerd[1593]: time="2026-03-07T00:53:00.176698732Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176713775Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176727752Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176740540Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176754189Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176776446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176790915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176803621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176821615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176834690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176847724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176859037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176873300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176886417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.177935 containerd[1593]: time="2026-03-07T00:53:00.176904287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.176916584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.176929044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.176941750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.176958105Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.176980279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.176992535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177005036Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177120048Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177138206Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177152224Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177164438Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177173620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177187064Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 00:53:00.178218 containerd[1593]: time="2026-03-07T00:53:00.177197598Z" level=info msg="NRI interface is disabled by configuration." Mar 7 00:53:00.178461 containerd[1593]: time="2026-03-07T00:53:00.177207312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 00:53:00.182446 containerd[1593]: time="2026-03-07T00:53:00.181456623Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 00:53:00.182446 containerd[1593]: time="2026-03-07T00:53:00.181571922Z" level=info msg="Connect containerd service" Mar 7 00:53:00.182446 containerd[1593]: time="2026-03-07T00:53:00.181732308Z" level=info msg="using legacy CRI server" Mar 7 00:53:00.182446 containerd[1593]: time="2026-03-07T00:53:00.181745137Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 00:53:00.182446 containerd[1593]: time="2026-03-07T00:53:00.181872569Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 00:53:00.185027 containerd[1593]: time="2026-03-07T00:53:00.184990446Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:53:00.186996 containerd[1593]: time="2026-03-07T00:53:00.186943116Z" level=info msg="Start subscribing containerd event" Mar 7 00:53:00.187051 containerd[1593]: time="2026-03-07T00:53:00.187011607Z" level=info msg="Start recovering state" Mar 7 00:53:00.187112 containerd[1593]: time="2026-03-07T00:53:00.187094854Z" level=info msg="Start event monitor" Mar 7 00:53:00.187138 containerd[1593]: time="2026-03-07T00:53:00.187113626Z" level=info msg="Start snapshots 
syncer" Mar 7 00:53:00.187138 containerd[1593]: time="2026-03-07T00:53:00.187126087Z" level=info msg="Start cni network conf syncer for default" Mar 7 00:53:00.187138 containerd[1593]: time="2026-03-07T00:53:00.187134858Z" level=info msg="Start streaming server" Mar 7 00:53:00.188164 containerd[1593]: time="2026-03-07T00:53:00.187692131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 00:53:00.188294 containerd[1593]: time="2026-03-07T00:53:00.188273669Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 00:53:00.188578 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 00:53:00.193630 containerd[1593]: time="2026-03-07T00:53:00.190568507Z" level=info msg="containerd successfully booted in 0.109120s" Mar 7 00:53:00.352199 sshd_keygen[1586]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 00:53:00.404400 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 00:53:00.415977 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 00:53:00.434052 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 00:53:00.434440 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 00:53:00.440844 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 00:53:00.471964 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 00:53:00.483087 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 00:53:00.488105 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 7 00:53:00.489868 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 00:53:00.561838 tar[1589]: linux-arm64/README.md Mar 7 00:53:00.585117 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 00:53:00.670834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:53:00.671615 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:00.673268 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 00:53:00.674356 systemd[1]: Startup finished in 5.350s (kernel) + 4.439s (userspace) = 9.789s. Mar 7 00:53:01.202192 kubelet[1698]: E0307 00:53:01.202127 1698 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:01.206885 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:01.207139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:11.458706 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 00:53:11.468210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:11.611802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:11.616253 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:11.663573 kubelet[1723]: E0307 00:53:11.663431 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:11.667426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:11.667717 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 7 00:53:21.918773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 00:53:21.926824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:22.068762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:22.074419 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:22.125491 kubelet[1743]: E0307 00:53:22.125419 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:22.129824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:22.130164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:32.251890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 00:53:32.257764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:32.387813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:53:32.394353 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:32.432815 kubelet[1763]: E0307 00:53:32.432754 1763 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:32.437781 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:32.437953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:35.618745 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 00:53:35.623916 systemd[1]: Started sshd@0-188.245.55.131:22-20.161.92.111:51914.service - OpenSSH per-connection server daemon (20.161.92.111:51914). Mar 7 00:53:36.218637 sshd[1771]: Accepted publickey for core from 20.161.92.111 port 51914 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:36.221509 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:36.231524 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 00:53:36.236823 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 00:53:36.240105 systemd-logind[1562]: New session 1 of user core. Mar 7 00:53:36.252760 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 00:53:36.262206 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 00:53:36.271152 (systemd)[1777]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 00:53:36.387857 systemd[1777]: Queued start job for default target default.target. 
Mar 7 00:53:36.389063 systemd[1777]: Created slice app.slice - User Application Slice. Mar 7 00:53:36.389239 systemd[1777]: Reached target paths.target - Paths. Mar 7 00:53:36.389323 systemd[1777]: Reached target timers.target - Timers. Mar 7 00:53:36.402805 systemd[1777]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 00:53:36.414918 systemd[1777]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 00:53:36.414998 systemd[1777]: Reached target sockets.target - Sockets. Mar 7 00:53:36.415012 systemd[1777]: Reached target basic.target - Basic System. Mar 7 00:53:36.415066 systemd[1777]: Reached target default.target - Main User Target. Mar 7 00:53:36.415094 systemd[1777]: Startup finished in 137ms. Mar 7 00:53:36.415780 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 00:53:36.422136 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 00:53:36.851980 systemd[1]: Started sshd@1-188.245.55.131:22-20.161.92.111:51930.service - OpenSSH per-connection server daemon (20.161.92.111:51930). Mar 7 00:53:37.441571 sshd[1789]: Accepted publickey for core from 20.161.92.111 port 51930 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:37.442777 sshd[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:37.448285 systemd-logind[1562]: New session 2 of user core. Mar 7 00:53:37.456145 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 00:53:37.863511 sshd[1789]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:37.868287 systemd[1]: sshd@1-188.245.55.131:22-20.161.92.111:51930.service: Deactivated successfully. Mar 7 00:53:37.872150 systemd-logind[1562]: Session 2 logged out. Waiting for processes to exit. Mar 7 00:53:37.873249 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 00:53:37.874205 systemd-logind[1562]: Removed session 2. 
Mar 7 00:53:37.968222 systemd[1]: Started sshd@2-188.245.55.131:22-20.161.92.111:51944.service - OpenSSH per-connection server daemon (20.161.92.111:51944). Mar 7 00:53:38.556090 sshd[1797]: Accepted publickey for core from 20.161.92.111 port 51944 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:38.557694 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:38.562426 systemd-logind[1562]: New session 3 of user core. Mar 7 00:53:38.571154 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 00:53:38.970845 sshd[1797]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:38.975634 systemd-logind[1562]: Session 3 logged out. Waiting for processes to exit. Mar 7 00:53:38.976996 systemd[1]: sshd@2-188.245.55.131:22-20.161.92.111:51944.service: Deactivated successfully. Mar 7 00:53:38.980786 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 00:53:38.982328 systemd-logind[1562]: Removed session 3. Mar 7 00:53:39.077318 systemd[1]: Started sshd@3-188.245.55.131:22-20.161.92.111:51948.service - OpenSSH per-connection server daemon (20.161.92.111:51948). Mar 7 00:53:39.661003 sshd[1805]: Accepted publickey for core from 20.161.92.111 port 51948 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:39.663215 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:39.667991 systemd-logind[1562]: New session 4 of user core. Mar 7 00:53:39.678987 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 00:53:40.081936 sshd[1805]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:40.088256 systemd[1]: sshd@3-188.245.55.131:22-20.161.92.111:51948.service: Deactivated successfully. Mar 7 00:53:40.092303 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 00:53:40.093440 systemd-logind[1562]: Session 4 logged out. Waiting for processes to exit. 
Mar 7 00:53:40.095009 systemd-logind[1562]: Removed session 4. Mar 7 00:53:40.181913 systemd[1]: Started sshd@4-188.245.55.131:22-20.161.92.111:57382.service - OpenSSH per-connection server daemon (20.161.92.111:57382). Mar 7 00:53:40.768595 sshd[1813]: Accepted publickey for core from 20.161.92.111 port 57382 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:40.769956 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:40.778196 systemd-logind[1562]: New session 5 of user core. Mar 7 00:53:40.790071 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 00:53:41.105180 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 00:53:41.105462 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:41.125555 sudo[1817]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:41.221030 sshd[1813]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:41.228383 systemd[1]: sshd@4-188.245.55.131:22-20.161.92.111:57382.service: Deactivated successfully. Mar 7 00:53:41.229262 systemd-logind[1562]: Session 5 logged out. Waiting for processes to exit. Mar 7 00:53:41.232064 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 00:53:41.233122 systemd-logind[1562]: Removed session 5. Mar 7 00:53:41.329954 systemd[1]: Started sshd@5-188.245.55.131:22-20.161.92.111:57398.service - OpenSSH per-connection server daemon (20.161.92.111:57398). Mar 7 00:53:41.927579 sshd[1822]: Accepted publickey for core from 20.161.92.111 port 57398 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:41.929699 sshd[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:41.936772 systemd-logind[1562]: New session 6 of user core. Mar 7 00:53:41.943900 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 00:53:42.258373 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 7 00:53:42.258732 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:53:42.263739 sudo[1827]: pam_unix(sudo:session): session closed for user root
Mar 7 00:53:42.269821 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 7 00:53:42.270128 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:53:42.285915 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 7 00:53:42.298669 auditctl[1830]: No rules
Mar 7 00:53:42.300260 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 7 00:53:42.301980 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 7 00:53:42.309467 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 00:53:42.339304 augenrules[1849]: No rules
Mar 7 00:53:42.342123 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 00:53:42.344923 sudo[1826]: pam_unix(sudo:session): session closed for user root
Mar 7 00:53:42.440998 sshd[1822]: pam_unix(sshd:session): session closed for user core
Mar 7 00:53:42.445551 systemd[1]: sshd@5-188.245.55.131:22-20.161.92.111:57398.service: Deactivated successfully.
Mar 7 00:53:42.447420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 7 00:53:42.448959 systemd-logind[1562]: Session 6 logged out. Waiting for processes to exit.
Mar 7 00:53:42.449758 systemd[1]: session-6.scope: Deactivated successfully.
Mar 7 00:53:42.453928 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:42.455669 systemd-logind[1562]: Removed session 6.
Mar 7 00:53:42.544963 systemd[1]: Started sshd@6-188.245.55.131:22-20.161.92.111:57404.service - OpenSSH per-connection server daemon (20.161.92.111:57404).
Mar 7 00:53:42.612895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:42.626309 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:53:42.667409 kubelet[1872]: E0307 00:53:42.667331 1872 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:53:42.670174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:53:42.670904 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:53:43.129685 sshd[1862]: Accepted publickey for core from 20.161.92.111 port 57404 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:53:43.131588 sshd[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:53:43.136748 systemd-logind[1562]: New session 7 of user core.
Mar 7 00:53:43.145930 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 7 00:53:43.454113 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 7 00:53:43.454426 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 7 00:53:43.758024 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 7 00:53:43.766400 (dockerd)[1897]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 7 00:53:44.014374 dockerd[1897]: time="2026-03-07T00:53:44.014192513Z" level=info msg="Starting up"
Mar 7 00:53:44.095569 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport56768180-merged.mount: Deactivated successfully.
Mar 7 00:53:44.122681 dockerd[1897]: time="2026-03-07T00:53:44.122579421Z" level=info msg="Loading containers: start."
Mar 7 00:53:44.228601 kernel: Initializing XFRM netlink socket
Mar 7 00:53:44.310285 systemd-networkd[1246]: docker0: Link UP
Mar 7 00:53:44.332169 dockerd[1897]: time="2026-03-07T00:53:44.332035647Z" level=info msg="Loading containers: done."
Mar 7 00:53:44.352924 dockerd[1897]: time="2026-03-07T00:53:44.352845271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 7 00:53:44.353157 dockerd[1897]: time="2026-03-07T00:53:44.352999857Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 7 00:53:44.353157 dockerd[1897]: time="2026-03-07T00:53:44.353138599Z" level=info msg="Daemon has completed initialization"
Mar 7 00:53:44.401466 dockerd[1897]: time="2026-03-07T00:53:44.401069835Z" level=info msg="API listen on /run/docker.sock"
Mar 7 00:53:44.402384 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 7 00:53:44.881854 containerd[1593]: time="2026-03-07T00:53:44.881796019Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 7 00:53:44.885691 update_engine[1569]: I20260307 00:53:44.885375 1569 update_attempter.cc:509] Updating boot flags...
Mar 7 00:53:44.957570 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1929)
Mar 7 00:53:45.018556 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1923)
Mar 7 00:53:45.091167 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4063332909-merged.mount: Deactivated successfully.
Mar 7 00:53:45.601659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673838127.mount: Deactivated successfully.
Mar 7 00:53:46.577989 containerd[1593]: time="2026-03-07T00:53:46.577011447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:46.578362 containerd[1593]: time="2026-03-07T00:53:46.578261671Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390272"
Mar 7 00:53:46.579561 containerd[1593]: time="2026-03-07T00:53:46.578908806Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:46.582202 containerd[1593]: time="2026-03-07T00:53:46.582164525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:46.584519 containerd[1593]: time="2026-03-07T00:53:46.584447741Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 1.702598193s"
Mar 7 00:53:46.584748 containerd[1593]: time="2026-03-07T00:53:46.584714460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 7 00:53:46.585597 containerd[1593]: time="2026-03-07T00:53:46.585560945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 7 00:53:47.804958 containerd[1593]: time="2026-03-07T00:53:47.804885040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:47.806550 containerd[1593]: time="2026-03-07T00:53:47.806385491Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552126"
Mar 7 00:53:47.807837 containerd[1593]: time="2026-03-07T00:53:47.807782486Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:47.811481 containerd[1593]: time="2026-03-07T00:53:47.811399393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:47.814424 containerd[1593]: time="2026-03-07T00:53:47.813897023Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.22828139s"
Mar 7 00:53:47.814424 containerd[1593]: time="2026-03-07T00:53:47.813970474Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 7 00:53:47.814924 containerd[1593]: time="2026-03-07T00:53:47.814789348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 7 00:53:48.918429 containerd[1593]: time="2026-03-07T00:53:48.918362835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:48.919559 containerd[1593]: time="2026-03-07T00:53:48.919514069Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301325"
Mar 7 00:53:48.921570 containerd[1593]: time="2026-03-07T00:53:48.920614175Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:48.925484 containerd[1593]: time="2026-03-07T00:53:48.925418617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:48.929683 containerd[1593]: time="2026-03-07T00:53:48.928175185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.113342111s"
Mar 7 00:53:48.929683 containerd[1593]: time="2026-03-07T00:53:48.928241954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 7 00:53:48.930060 containerd[1593]: time="2026-03-07T00:53:48.929958943Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 7 00:53:49.941768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851306995.mount: Deactivated successfully.
Mar 7 00:53:50.311068 containerd[1593]: time="2026-03-07T00:53:50.309836684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:50.311711 containerd[1593]: time="2026-03-07T00:53:50.311674107Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148896"
Mar 7 00:53:50.312271 containerd[1593]: time="2026-03-07T00:53:50.312245977Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:50.314995 containerd[1593]: time="2026-03-07T00:53:50.314951826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:50.316556 containerd[1593]: time="2026-03-07T00:53:50.316493853Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.386497345s"
Mar 7 00:53:50.316556 containerd[1593]: time="2026-03-07T00:53:50.316544819Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 7 00:53:50.317402 containerd[1593]: time="2026-03-07T00:53:50.317382721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 00:53:50.931190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300388367.mount: Deactivated successfully.
Mar 7 00:53:51.776824 containerd[1593]: time="2026-03-07T00:53:51.776770559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:51.779070 containerd[1593]: time="2026-03-07T00:53:51.778677140Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
Mar 7 00:53:51.779070 containerd[1593]: time="2026-03-07T00:53:51.779011899Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:51.786366 containerd[1593]: time="2026-03-07T00:53:51.786304866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:51.789376 containerd[1593]: time="2026-03-07T00:53:51.789328817Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.471834563s"
Mar 7 00:53:51.789376 containerd[1593]: time="2026-03-07T00:53:51.789378623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 7 00:53:51.790188 containerd[1593]: time="2026-03-07T00:53:51.789866960Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 00:53:52.313061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255875529.mount: Deactivated successfully.
Mar 7 00:53:52.319954 containerd[1593]: time="2026-03-07T00:53:52.319895970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:52.322010 containerd[1593]: time="2026-03-07T00:53:52.321941037Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Mar 7 00:53:52.323860 containerd[1593]: time="2026-03-07T00:53:52.323780002Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:52.327569 containerd[1593]: time="2026-03-07T00:53:52.327446569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:52.329083 containerd[1593]: time="2026-03-07T00:53:52.328369551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 538.474428ms"
Mar 7 00:53:52.329083 containerd[1593]: time="2026-03-07T00:53:52.328411516Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 7 00:53:52.329083 containerd[1593]: time="2026-03-07T00:53:52.328822401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 00:53:52.751878 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 7 00:53:52.760240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:52.913854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:52.926980 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:53:52.966619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223879274.mount: Deactivated successfully.
Mar 7 00:53:52.974012 kubelet[2195]: E0307 00:53:52.973964 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:53:52.978808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:53:52.979446 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:53:53.676257 containerd[1593]: time="2026-03-07T00:53:53.676201165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:53.677435 containerd[1593]: time="2026-03-07T00:53:53.677399173Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878"
Mar 7 00:53:53.678402 containerd[1593]: time="2026-03-07T00:53:53.678350914Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:53.685731 containerd[1593]: time="2026-03-07T00:53:53.685585963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:53:53.687887 containerd[1593]: time="2026-03-07T00:53:53.687854044Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.359002839s"
Mar 7 00:53:53.688094 containerd[1593]: time="2026-03-07T00:53:53.687986418Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 7 00:53:59.479158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:59.493387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:59.529359 systemd[1]: Reloading requested from client PID 2295 ('systemctl') (unit session-7.scope)...
Mar 7 00:53:59.529379 systemd[1]: Reloading...
Mar 7 00:53:59.646561 zram_generator::config[2348]: No configuration found.
Mar 7 00:53:59.741032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:53:59.810426 systemd[1]: Reloading finished in 280 ms.
Mar 7 00:53:59.861473 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 00:53:59.861715 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 00:53:59.862136 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:59.870008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:53:59.995732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:53:59.997572 (kubelet)[2395]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 00:54:00.039566 kubelet[2395]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:54:00.039566 kubelet[2395]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 00:54:00.039566 kubelet[2395]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:54:00.039566 kubelet[2395]: I0307 00:54:00.038994 2395 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 00:54:01.088492 kubelet[2395]: I0307 00:54:01.088432 2395 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 00:54:01.088492 kubelet[2395]: I0307 00:54:01.088470 2395 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 00:54:01.089080 kubelet[2395]: I0307 00:54:01.088765 2395 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 00:54:01.116562 kubelet[2395]: E0307 00:54:01.115744 2395 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://188.245.55.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 00:54:01.117705 kubelet[2395]: I0307 00:54:01.117400 2395 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:54:01.132874 kubelet[2395]: E0307 00:54:01.132760 2395 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 00:54:01.132874 kubelet[2395]: I0307 00:54:01.132837 2395 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 00:54:01.138124 kubelet[2395]: I0307 00:54:01.138060 2395 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 00:54:01.139874 kubelet[2395]: I0307 00:54:01.139797 2395 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 00:54:01.140067 kubelet[2395]: I0307 00:54:01.139851 2395 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-4bed64c074","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 00:54:01.140067 kubelet[2395]: I0307 00:54:01.140047 2395 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 00:54:01.140067 kubelet[2395]: I0307 00:54:01.140058 2395 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 00:54:01.140303 kubelet[2395]: I0307 00:54:01.140263 2395 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:54:01.143738 kubelet[2395]: I0307 00:54:01.143676 2395 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 00:54:01.143858 kubelet[2395]: I0307 00:54:01.143834 2395 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 00:54:01.143933 kubelet[2395]: I0307 00:54:01.143875 2395 kubelet.go:386] "Adding apiserver pod source"
Mar 7 00:54:01.143933 kubelet[2395]: I0307 00:54:01.143893 2395 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 00:54:01.149562 kubelet[2395]: E0307 00:54:01.148719 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://188.245.55.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-4bed64c074&limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 00:54:01.149562 kubelet[2395]: E0307 00:54:01.149197 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://188.245.55.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 00:54:01.151576 kubelet[2395]: I0307 00:54:01.150048 2395 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 00:54:01.151576 kubelet[2395]: I0307 00:54:01.150724 2395 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 00:54:01.151576 kubelet[2395]: W0307 00:54:01.150848 2395 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 00:54:01.153790 kubelet[2395]: I0307 00:54:01.153774 2395 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 00:54:01.153945 kubelet[2395]: I0307 00:54:01.153933 2395 server.go:1289] "Started kubelet"
Mar 7 00:54:01.159238 kubelet[2395]: I0307 00:54:01.159212 2395 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 00:54:01.160863 kubelet[2395]: E0307 00:54:01.159563 2395 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.55.131:6443/api/v1/namespaces/default/events\": dial tcp 188.245.55.131:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-4bed64c074.189a68fd76dc732f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-4bed64c074,UID:ci-4081-3-6-n-4bed64c074,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-4bed64c074,},FirstTimestamp:2026-03-07 00:54:01.153884975 +0000 UTC m=+1.150957660,LastTimestamp:2026-03-07 00:54:01.153884975 +0000 UTC m=+1.150957660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-4bed64c074,}"
Mar 7 00:54:01.161842 kubelet[2395]: I0307 00:54:01.161797 2395 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 00:54:01.163097 kubelet[2395]: I0307 00:54:01.163064 2395 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 00:54:01.166307 kubelet[2395]: I0307 00:54:01.166235 2395 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 00:54:01.166488 kubelet[2395]: I0307 00:54:01.166464 2395 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 00:54:01.166734 kubelet[2395]: I0307 00:54:01.166710 2395 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:54:01.168008 kubelet[2395]: I0307 00:54:01.167989 2395 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 00:54:01.168341 kubelet[2395]: E0307 00:54:01.168317 2395 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4bed64c074\" not found"
Mar 7 00:54:01.169874 kubelet[2395]: E0307 00:54:01.169819 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4bed64c074?timeout=10s\": dial tcp 188.245.55.131:6443: connect: connection refused" interval="200ms"
Mar 7 00:54:01.170683 kubelet[2395]: I0307 00:54:01.170653 2395 factory.go:223] Registration of the systemd container factory successfully
Mar 7 00:54:01.170770 kubelet[2395]: I0307 00:54:01.170748 2395 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 00:54:01.171945 kubelet[2395]: E0307 00:54:01.171874 2395 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 00:54:01.172114 kubelet[2395]: I0307 00:54:01.172097 2395 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 00:54:01.172218 kubelet[2395]: I0307 00:54:01.172208 2395 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 00:54:01.172628 kubelet[2395]: I0307 00:54:01.172505 2395 factory.go:223] Registration of the containerd container factory successfully
Mar 7 00:54:01.197234 kubelet[2395]: I0307 00:54:01.197139 2395 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 00:54:01.198853 kubelet[2395]: I0307 00:54:01.198790 2395 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 00:54:01.198853 kubelet[2395]: I0307 00:54:01.198841 2395 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 00:54:01.198992 kubelet[2395]: I0307 00:54:01.198869 2395 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:54:01.198992 kubelet[2395]: I0307 00:54:01.198878 2395 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 00:54:01.198992 kubelet[2395]: E0307 00:54:01.198959 2395 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 00:54:01.214198 kubelet[2395]: E0307 00:54:01.214097 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://188.245.55.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 00:54:01.216103 kubelet[2395]: E0307 00:54:01.216038 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://188.245.55.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 00:54:01.221869 kubelet[2395]: I0307 00:54:01.221827 2395 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 00:54:01.222029 kubelet[2395]: I0307 00:54:01.221933 2395 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 00:54:01.222029 kubelet[2395]: I0307 00:54:01.222007 2395 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:54:01.224881 kubelet[2395]: I0307 00:54:01.224827 2395 policy_none.go:49] "None policy: Start"
Mar 7 00:54:01.224881 kubelet[2395]: I0307 00:54:01.224860 2395 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 00:54:01.224881 kubelet[2395]: I0307 00:54:01.224872 2395 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 00:54:01.230392 kubelet[2395]: E0307 00:54:01.230341 2395 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 00:54:01.232124 kubelet[2395]: I0307 00:54:01.232098 2395 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 00:54:01.232202 kubelet[2395]: I0307 00:54:01.232122 2395 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 00:54:01.232573 kubelet[2395]: I0307 00:54:01.232431 2395 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 00:54:01.233472 kubelet[2395]: E0307 00:54:01.233446 2395 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 00:54:01.233576 kubelet[2395]: E0307 00:54:01.233486 2395 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-4bed64c074\" not found"
Mar 7 00:54:01.311124 kubelet[2395]: E0307 00:54:01.311064 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074"
Mar 7 00:54:01.319490 kubelet[2395]: E0307 00:54:01.319381 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074"
Mar 7 00:54:01.322846 kubelet[2395]: E0307 00:54:01.322793 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074"
Mar 7 00:54:01.336679 kubelet[2395]: I0307 00:54:01.335864 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4bed64c074"
Mar 7 00:54:01.336679 kubelet[2395]: E0307 00:54:01.336458 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.55.131:6443/api/v1/nodes\": dial tcp 188.245.55.131:6443: connect: connection refused" node="ci-4081-3-6-n-4bed64c074"
Mar 7 00:54:01.371148 kubelet[2395]: E0307 00:54:01.371003 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4bed64c074?timeout=10s\": dial tcp 188.245.55.131:6443: connect: connection refused" interval="400ms"
Mar 7 00:54:01.473500 kubelet[2395]: I0307 00:54:01.473146 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName:
\"kubernetes.io/host-path/ecc4f08f49f3a779af79aa3a0d525e9e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" (UID: \"ecc4f08f49f3a779af79aa3a0d525e9e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473500 kubelet[2395]: I0307 00:54:01.473203 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecc4f08f49f3a779af79aa3a0d525e9e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" (UID: \"ecc4f08f49f3a779af79aa3a0d525e9e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473500 kubelet[2395]: I0307 00:54:01.473234 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecc4f08f49f3a779af79aa3a0d525e9e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" (UID: \"ecc4f08f49f3a779af79aa3a0d525e9e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473500 kubelet[2395]: I0307 00:54:01.473271 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473500 kubelet[2395]: I0307 00:54:01.473303 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473862 kubelet[2395]: I0307 00:54:01.473332 
2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473862 kubelet[2395]: I0307 00:54:01.473359 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2969ab8b1d026091b3d6fed086b85208-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-4bed64c074\" (UID: \"2969ab8b1d026091b3d6fed086b85208\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473862 kubelet[2395]: I0307 00:54:01.473381 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.473862 kubelet[2395]: I0307 00:54:01.473418 2395 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.539222 kubelet[2395]: I0307 00:54:01.538785 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.539222 kubelet[2395]: E0307 00:54:01.539135 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://188.245.55.131:6443/api/v1/nodes\": dial tcp 188.245.55.131:6443: connect: connection refused" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.614587 containerd[1593]: time="2026-03-07T00:54:01.614051317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-4bed64c074,Uid:6dd01621514b4a3349b6a288f39a5369,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:01.621322 containerd[1593]: time="2026-03-07T00:54:01.621204071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-4bed64c074,Uid:2969ab8b1d026091b3d6fed086b85208,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:01.624369 containerd[1593]: time="2026-03-07T00:54:01.624033090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-4bed64c074,Uid:ecc4f08f49f3a779af79aa3a0d525e9e,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:01.772052 kubelet[2395]: E0307 00:54:01.771975 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4bed64c074?timeout=10s\": dial tcp 188.245.55.131:6443: connect: connection refused" interval="800ms" Mar 7 00:54:01.942023 kubelet[2395]: I0307 00:54:01.941914 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.942388 kubelet[2395]: E0307 00:54:01.942365 2395 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.55.131:6443/api/v1/nodes\": dial tcp 188.245.55.131:6443: connect: connection refused" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:01.988827 kubelet[2395]: E0307 00:54:01.988763 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://188.245.55.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-4bed64c074&limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:54:02.187806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount224056235.mount: Deactivated successfully. Mar 7 00:54:02.197297 containerd[1593]: time="2026-03-07T00:54:02.197144059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:02.198592 containerd[1593]: time="2026-03-07T00:54:02.198499280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:02.199848 containerd[1593]: time="2026-03-07T00:54:02.199782776Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 7 00:54:02.200597 containerd[1593]: time="2026-03-07T00:54:02.200555314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:54:02.202558 containerd[1593]: time="2026-03-07T00:54:02.201711520Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:02.202641 containerd[1593]: time="2026-03-07T00:54:02.202609387Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:54:02.203723 containerd[1593]: time="2026-03-07T00:54:02.203411847Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:02.209946 containerd[1593]: time="2026-03-07T00:54:02.209884171Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:54:02.212347 containerd[1593]: time="2026-03-07T00:54:02.212283830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 590.992713ms" Mar 7 00:54:02.219178 containerd[1593]: time="2026-03-07T00:54:02.219121782Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.982924ms" Mar 7 00:54:02.222726 containerd[1593]: time="2026-03-07T00:54:02.221957153Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.773426ms" Mar 7 00:54:02.288647 kubelet[2395]: E0307 00:54:02.288603 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://188.245.55.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 00:54:02.354577 containerd[1593]: time="2026-03-07T00:54:02.354286564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:02.354577 containerd[1593]: time="2026-03-07T00:54:02.354336488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:02.354577 containerd[1593]: time="2026-03-07T00:54:02.354399213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:02.354577 containerd[1593]: time="2026-03-07T00:54:02.354419854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:02.354577 containerd[1593]: time="2026-03-07T00:54:02.354539983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:02.354577 containerd[1593]: time="2026-03-07T00:54:02.354542703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:02.354809 containerd[1593]: time="2026-03-07T00:54:02.354676393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:02.355350 containerd[1593]: time="2026-03-07T00:54:02.355267357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:02.356010 containerd[1593]: time="2026-03-07T00:54:02.355778196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:02.356010 containerd[1593]: time="2026-03-07T00:54:02.355900005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:02.356010 containerd[1593]: time="2026-03-07T00:54:02.355924326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:02.356690 containerd[1593]: time="2026-03-07T00:54:02.356089259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:02.432796 containerd[1593]: time="2026-03-07T00:54:02.432757149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-4bed64c074,Uid:ecc4f08f49f3a779af79aa3a0d525e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2810f3d94b3c6a87dd79f029ee89ece5c0676ed10e635609c5c2e6447ee1566\"" Mar 7 00:54:02.439679 containerd[1593]: time="2026-03-07T00:54:02.438791480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-4bed64c074,Uid:6dd01621514b4a3349b6a288f39a5369,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb76bf62440752e3ceda19f6f74eeccec6c69fcb820a2702e6309b010058604b\"" Mar 7 00:54:02.440378 containerd[1593]: time="2026-03-07T00:54:02.440334556Z" level=info msg="CreateContainer within sandbox \"e2810f3d94b3c6a87dd79f029ee89ece5c0676ed10e635609c5c2e6447ee1566\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 00:54:02.445432 containerd[1593]: time="2026-03-07T00:54:02.445388973Z" level=info msg="CreateContainer within sandbox \"eb76bf62440752e3ceda19f6f74eeccec6c69fcb820a2702e6309b010058604b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 00:54:02.448192 containerd[1593]: time="2026-03-07T00:54:02.447369401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-4bed64c074,Uid:2969ab8b1d026091b3d6fed086b85208,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"1ebcd748f1fc68c3f78a753d5a149cde22e11a75a261a3654139db3f9302aba2\"" Mar 7 00:54:02.454065 containerd[1593]: time="2026-03-07T00:54:02.453622549Z" level=info msg="CreateContainer within sandbox \"1ebcd748f1fc68c3f78a753d5a149cde22e11a75a261a3654139db3f9302aba2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 00:54:02.466176 containerd[1593]: time="2026-03-07T00:54:02.466100521Z" level=info msg="CreateContainer within sandbox \"e2810f3d94b3c6a87dd79f029ee89ece5c0676ed10e635609c5c2e6447ee1566\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d11d03e9d5486633a6b8dcb5d042d286c3bd5911e39bec2945cb8d525c098d91\"" Mar 7 00:54:02.468550 containerd[1593]: time="2026-03-07T00:54:02.467297051Z" level=info msg="StartContainer for \"d11d03e9d5486633a6b8dcb5d042d286c3bd5911e39bec2945cb8d525c098d91\"" Mar 7 00:54:02.472417 containerd[1593]: time="2026-03-07T00:54:02.472371670Z" level=info msg="CreateContainer within sandbox \"1ebcd748f1fc68c3f78a753d5a149cde22e11a75a261a3654139db3f9302aba2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5178d5af6c020c04bfd027b688cf2e713ed28670aafae6b3fde8b22a52301477\"" Mar 7 00:54:02.473385 containerd[1593]: time="2026-03-07T00:54:02.473349143Z" level=info msg="CreateContainer within sandbox \"eb76bf62440752e3ceda19f6f74eeccec6c69fcb820a2702e6309b010058604b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc20bea1776dfb9950fb5c9417d55ff774e9d6d0f291f39b9dc5b512b392689a\"" Mar 7 00:54:02.473866 containerd[1593]: time="2026-03-07T00:54:02.473835019Z" level=info msg="StartContainer for \"5178d5af6c020c04bfd027b688cf2e713ed28670aafae6b3fde8b22a52301477\"" Mar 7 00:54:02.474351 containerd[1593]: time="2026-03-07T00:54:02.474324936Z" level=info msg="StartContainer for \"dc20bea1776dfb9950fb5c9417d55ff774e9d6d0f291f39b9dc5b512b392689a\"" Mar 7 00:54:02.484481 kubelet[2395]: E0307 00:54:02.484441 2395 reflector.go:200] "Failed 
to watch" err="failed to list *v1.Service: Get \"https://188.245.55.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:54:02.576062 kubelet[2395]: E0307 00:54:02.575987 2395 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.55.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-4bed64c074?timeout=10s\": dial tcp 188.245.55.131:6443: connect: connection refused" interval="1.6s" Mar 7 00:54:02.581030 containerd[1593]: time="2026-03-07T00:54:02.580960666Z" level=info msg="StartContainer for \"d11d03e9d5486633a6b8dcb5d042d286c3bd5911e39bec2945cb8d525c098d91\" returns successfully" Mar 7 00:54:02.587401 containerd[1593]: time="2026-03-07T00:54:02.586545244Z" level=info msg="StartContainer for \"5178d5af6c020c04bfd027b688cf2e713ed28670aafae6b3fde8b22a52301477\" returns successfully" Mar 7 00:54:02.593008 containerd[1593]: time="2026-03-07T00:54:02.592922280Z" level=info msg="StartContainer for \"dc20bea1776dfb9950fb5c9417d55ff774e9d6d0f291f39b9dc5b512b392689a\" returns successfully" Mar 7 00:54:02.631826 kubelet[2395]: E0307 00:54:02.631727 2395 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://188.245.55.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.55.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:54:02.745826 kubelet[2395]: I0307 00:54:02.745713 2395 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:03.236111 kubelet[2395]: E0307 00:54:03.236073 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:03.245961 kubelet[2395]: E0307 00:54:03.245672 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:03.246562 kubelet[2395]: E0307 00:54:03.246194 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.250634 kubelet[2395]: E0307 00:54:04.248472 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.250634 kubelet[2395]: E0307 00:54:04.248840 2395 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-4bed64c074\" not found" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.742164 kubelet[2395]: I0307 00:54:04.742110 2395 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.742461 kubelet[2395]: E0307 00:54:04.742302 2395 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-4bed64c074\": node \"ci-4081-3-6-n-4bed64c074\" not found" Mar 7 00:54:04.764097 kubelet[2395]: E0307 00:54:04.764045 2395 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4bed64c074\" not found" Mar 7 00:54:04.865163 kubelet[2395]: E0307 00:54:04.865089 2395 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-4bed64c074\" not found" Mar 7 00:54:04.970113 kubelet[2395]: I0307 00:54:04.968972 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 
7 00:54:04.977256 kubelet[2395]: E0307 00:54:04.977221 2395 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.977476 kubelet[2395]: I0307 00:54:04.977380 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.980804 kubelet[2395]: E0307 00:54:04.980714 2395 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-4bed64c074\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.980804 kubelet[2395]: I0307 00:54:04.980741 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:04.986926 kubelet[2395]: E0307 00:54:04.986872 2395 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:05.151357 kubelet[2395]: I0307 00:54:05.150035 2395 apiserver.go:52] "Watching apiserver" Mar 7 00:54:05.173319 kubelet[2395]: I0307 00:54:05.173295 2395 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:54:05.248857 kubelet[2395]: I0307 00:54:05.248744 2395 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:05.251389 kubelet[2395]: E0307 00:54:05.251200 2395 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-4bed64c074\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" Mar 
7 00:54:06.968397 systemd[1]: Reloading requested from client PID 2674 ('systemctl') (unit session-7.scope)... Mar 7 00:54:06.968418 systemd[1]: Reloading... Mar 7 00:54:07.055569 zram_generator::config[2717]: No configuration found. Mar 7 00:54:07.187986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:54:07.270619 systemd[1]: Reloading finished in 301 ms. Mar 7 00:54:07.298973 kubelet[2395]: I0307 00:54:07.298917 2395 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:54:07.299776 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:07.314137 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:54:07.314471 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:07.327838 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:07.474903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:07.484736 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:54:07.538827 kubelet[2769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:07.538827 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:54:07.538827 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:54:07.538827 kubelet[2769]: I0307 00:54:07.538205 2769 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:54:07.547145 kubelet[2769]: I0307 00:54:07.547079 2769 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:54:07.547145 kubelet[2769]: I0307 00:54:07.547105 2769 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:54:07.547744 kubelet[2769]: I0307 00:54:07.547716 2769 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:54:07.550552 kubelet[2769]: I0307 00:54:07.549329 2769 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 00:54:07.552158 kubelet[2769]: I0307 00:54:07.552133 2769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:54:07.563061 kubelet[2769]: E0307 00:54:07.563028 2769 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:54:07.563220 kubelet[2769]: I0307 00:54:07.563207 2769 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 00:54:07.565730 kubelet[2769]: I0307 00:54:07.565707 2769 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 00:54:07.566331 kubelet[2769]: I0307 00:54:07.566268 2769 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:54:07.566612 kubelet[2769]: I0307 00:54:07.566424 2769 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-4bed64c074","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 00:54:07.566727 kubelet[2769]: I0307 00:54:07.566716 2769 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 
00:54:07.566786 kubelet[2769]: I0307 00:54:07.566778 2769 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 00:54:07.566885 kubelet[2769]: I0307 00:54:07.566876 2769 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:54:07.567098 kubelet[2769]: I0307 00:54:07.567089 2769 kubelet.go:480] "Attempting to sync node with API server" Mar 7 00:54:07.567655 kubelet[2769]: I0307 00:54:07.567640 2769 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:54:07.567765 kubelet[2769]: I0307 00:54:07.567755 2769 kubelet.go:386] "Adding apiserver pod source" Mar 7 00:54:07.567828 kubelet[2769]: I0307 00:54:07.567819 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:54:07.572174 kubelet[2769]: I0307 00:54:07.572141 2769 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:54:07.572765 kubelet[2769]: I0307 00:54:07.572742 2769 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:54:07.584431 kubelet[2769]: I0307 00:54:07.584405 2769 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 00:54:07.586875 kubelet[2769]: I0307 00:54:07.586836 2769 server.go:1289] "Started kubelet" Mar 7 00:54:07.589201 kubelet[2769]: I0307 00:54:07.589038 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:54:07.601007 kubelet[2769]: I0307 00:54:07.600967 2769 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:54:07.603309 kubelet[2769]: I0307 00:54:07.603187 2769 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:54:07.608699 kubelet[2769]: E0307 00:54:07.608609 2769 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:54:07.610792 kubelet[2769]: I0307 00:54:07.608804 2769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:54:07.610792 kubelet[2769]: I0307 00:54:07.609019 2769 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:54:07.610792 kubelet[2769]: I0307 00:54:07.609213 2769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:54:07.610792 kubelet[2769]: I0307 00:54:07.609304 2769 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 00:54:07.613190 kubelet[2769]: I0307 00:54:07.612568 2769 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:54:07.613190 kubelet[2769]: I0307 00:54:07.612672 2769 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:54:07.614975 kubelet[2769]: I0307 00:54:07.614941 2769 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:54:07.616039 kubelet[2769]: I0307 00:54:07.616020 2769 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 00:54:07.621723 kubelet[2769]: I0307 00:54:07.621520 2769 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 00:54:07.626793 kubelet[2769]: I0307 00:54:07.626519 2769 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 00:54:07.626793 kubelet[2769]: I0307 00:54:07.626575 2769 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 00:54:07.626793 kubelet[2769]: I0307 00:54:07.626600 2769 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 00:54:07.626793 kubelet[2769]: I0307 00:54:07.626608 2769 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:54:07.626793 kubelet[2769]: E0307 00:54:07.626664 2769 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:54:07.630355 kubelet[2769]: I0307 00:54:07.630307 2769 reconciler.go:26] "Reconciler: start to sync state" Mar 7 00:54:07.685880 kubelet[2769]: I0307 00:54:07.685840 2769 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.685997 2769 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.686021 2769 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.686155 2769 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.686164 2769 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.686185 2769 policy_none.go:49] "None policy: Start" Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.686195 2769 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.686202 2769 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:54:07.687366 kubelet[2769]: I0307 00:54:07.686280 2769 state_mem.go:75] "Updated machine memory state" Mar 7 00:54:07.688090 kubelet[2769]: E0307 00:54:07.688066 2769 manager.go:517] "Failed to read data from checkpoint" err="checkpoint 
is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:54:07.688383 kubelet[2769]: I0307 00:54:07.688363 2769 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:54:07.688480 kubelet[2769]: I0307 00:54:07.688443 2769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:54:07.692070 kubelet[2769]: I0307 00:54:07.692047 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:54:07.696176 kubelet[2769]: E0307 00:54:07.696153 2769 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:54:07.727827 kubelet[2769]: I0307 00:54:07.727763 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.729278 kubelet[2769]: I0307 00:54:07.729220 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.729473 kubelet[2769]: I0307 00:54:07.729447 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.801606 kubelet[2769]: I0307 00:54:07.800219 2769 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.814051 kubelet[2769]: I0307 00:54:07.814024 2769 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.814345 kubelet[2769]: I0307 00:54:07.814289 2769 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.831818 kubelet[2769]: I0307 00:54:07.831780 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832002 kubelet[2769]: I0307 00:54:07.831987 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecc4f08f49f3a779af79aa3a0d525e9e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" (UID: \"ecc4f08f49f3a779af79aa3a0d525e9e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832096 kubelet[2769]: I0307 00:54:07.832084 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832168 kubelet[2769]: I0307 00:54:07.832157 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832254 kubelet[2769]: I0307 00:54:07.832242 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832347 kubelet[2769]: I0307 00:54:07.832334 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6dd01621514b4a3349b6a288f39a5369-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-4bed64c074\" (UID: \"6dd01621514b4a3349b6a288f39a5369\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832435 kubelet[2769]: I0307 00:54:07.832423 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2969ab8b1d026091b3d6fed086b85208-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-4bed64c074\" (UID: \"2969ab8b1d026091b3d6fed086b85208\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832504 kubelet[2769]: I0307 00:54:07.832493 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecc4f08f49f3a779af79aa3a0d525e9e-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" (UID: \"ecc4f08f49f3a779af79aa3a0d525e9e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:07.832608 kubelet[2769]: I0307 00:54:07.832595 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecc4f08f49f3a779af79aa3a0d525e9e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" (UID: \"ecc4f08f49f3a779af79aa3a0d525e9e\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:08.573004 kubelet[2769]: I0307 00:54:08.572966 2769 apiserver.go:52] "Watching apiserver" Mar 7 00:54:08.617582 kubelet[2769]: I0307 00:54:08.616860 2769 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:54:08.662236 kubelet[2769]: I0307 00:54:08.661490 2769 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:08.671423 kubelet[2769]: E0307 00:54:08.671324 2769 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-4bed64c074\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" Mar 7 00:54:08.707507 kubelet[2769]: I0307 00:54:08.707203 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-4bed64c074" podStartSLOduration=1.707175448 podStartE2EDuration="1.707175448s" podCreationTimestamp="2026-03-07 00:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:08.689971583 +0000 UTC m=+1.199535083" watchObservedRunningTime="2026-03-07 00:54:08.707175448 +0000 UTC m=+1.216738988" Mar 7 00:54:08.722410 kubelet[2769]: I0307 00:54:08.721830 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-4bed64c074" podStartSLOduration=1.7218111139999999 podStartE2EDuration="1.721811114s" podCreationTimestamp="2026-03-07 00:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:08.707563752 +0000 UTC m=+1.217127252" watchObservedRunningTime="2026-03-07 00:54:08.721811114 +0000 UTC m=+1.231374654" Mar 7 00:54:08.739935 kubelet[2769]: I0307 00:54:08.739845 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-4bed64c074" podStartSLOduration=1.739820428 podStartE2EDuration="1.739820428s" podCreationTimestamp="2026-03-07 00:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:08.723801117 +0000 UTC m=+1.233364617" watchObservedRunningTime="2026-03-07 
00:54:08.739820428 +0000 UTC m=+1.249383928" Mar 7 00:54:11.894862 kubelet[2769]: I0307 00:54:11.894770 2769 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:54:11.895603 containerd[1593]: time="2026-03-07T00:54:11.895151275Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 00:54:11.896718 kubelet[2769]: I0307 00:54:11.896237 2769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:54:12.966516 kubelet[2769]: I0307 00:54:12.966417 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7555dfbb-c198-419d-b125-e08fb5596851-kube-proxy\") pod \"kube-proxy-42zjt\" (UID: \"7555dfbb-c198-419d-b125-e08fb5596851\") " pod="kube-system/kube-proxy-42zjt" Mar 7 00:54:12.966516 kubelet[2769]: I0307 00:54:12.966464 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7555dfbb-c198-419d-b125-e08fb5596851-lib-modules\") pod \"kube-proxy-42zjt\" (UID: \"7555dfbb-c198-419d-b125-e08fb5596851\") " pod="kube-system/kube-proxy-42zjt" Mar 7 00:54:12.966516 kubelet[2769]: I0307 00:54:12.966491 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7555dfbb-c198-419d-b125-e08fb5596851-xtables-lock\") pod \"kube-proxy-42zjt\" (UID: \"7555dfbb-c198-419d-b125-e08fb5596851\") " pod="kube-system/kube-proxy-42zjt" Mar 7 00:54:12.966516 kubelet[2769]: I0307 00:54:12.966583 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw2qp\" (UniqueName: \"kubernetes.io/projected/7555dfbb-c198-419d-b125-e08fb5596851-kube-api-access-tw2qp\") pod \"kube-proxy-42zjt\" (UID: 
\"7555dfbb-c198-419d-b125-e08fb5596851\") " pod="kube-system/kube-proxy-42zjt" Mar 7 00:54:13.168295 kubelet[2769]: I0307 00:54:13.168123 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfg84\" (UniqueName: \"kubernetes.io/projected/e6c448ed-67e5-4770-a531-ec052a224a55-kube-api-access-kfg84\") pod \"tigera-operator-6bf85f8dd-c2g8d\" (UID: \"e6c448ed-67e5-4770-a531-ec052a224a55\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c2g8d" Mar 7 00:54:13.168295 kubelet[2769]: I0307 00:54:13.168201 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6c448ed-67e5-4770-a531-ec052a224a55-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-c2g8d\" (UID: \"e6c448ed-67e5-4770-a531-ec052a224a55\") " pod="tigera-operator/tigera-operator-6bf85f8dd-c2g8d" Mar 7 00:54:13.196578 containerd[1593]: time="2026-03-07T00:54:13.196458490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-42zjt,Uid:7555dfbb-c198-419d-b125-e08fb5596851,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:13.225727 containerd[1593]: time="2026-03-07T00:54:13.225078926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:13.225727 containerd[1593]: time="2026-03-07T00:54:13.225146170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:13.225727 containerd[1593]: time="2026-03-07T00:54:13.225167491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.225727 containerd[1593]: time="2026-03-07T00:54:13.225268897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.265222 containerd[1593]: time="2026-03-07T00:54:13.265170667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-42zjt,Uid:7555dfbb-c198-419d-b125-e08fb5596851,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e9bf65f9ecc219b34282f362a15cdd07825d8e40b8fc85be95e4236e05c79b5\"" Mar 7 00:54:13.275548 containerd[1593]: time="2026-03-07T00:54:13.273465638Z" level=info msg="CreateContainer within sandbox \"4e9bf65f9ecc219b34282f362a15cdd07825d8e40b8fc85be95e4236e05c79b5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:54:13.291782 containerd[1593]: time="2026-03-07T00:54:13.291703510Z" level=info msg="CreateContainer within sandbox \"4e9bf65f9ecc219b34282f362a15cdd07825d8e40b8fc85be95e4236e05c79b5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6c830216e5841b79dc93b728eba0768357c253eb65cfd048a7398d4296c3100e\"" Mar 7 00:54:13.293607 containerd[1593]: time="2026-03-07T00:54:13.292665362Z" level=info msg="StartContainer for \"6c830216e5841b79dc93b728eba0768357c253eb65cfd048a7398d4296c3100e\"" Mar 7 00:54:13.350955 containerd[1593]: time="2026-03-07T00:54:13.350866168Z" level=info msg="StartContainer for \"6c830216e5841b79dc93b728eba0768357c253eb65cfd048a7398d4296c3100e\" returns successfully" Mar 7 00:54:13.429777 containerd[1593]: time="2026-03-07T00:54:13.429716296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c2g8d,Uid:e6c448ed-67e5-4770-a531-ec052a224a55,Namespace:tigera-operator,Attempt:0,}" Mar 7 00:54:13.456519 containerd[1593]: time="2026-03-07T00:54:13.455627265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:13.456519 containerd[1593]: time="2026-03-07T00:54:13.455708150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:13.456519 containerd[1593]: time="2026-03-07T00:54:13.455756392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.456519 containerd[1593]: time="2026-03-07T00:54:13.455895160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:13.520176 containerd[1593]: time="2026-03-07T00:54:13.520134334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-c2g8d,Uid:e6c448ed-67e5-4770-a531-ec052a224a55,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"934c8079256d2fd4729aa354f8827edc0936864af1165a846aa21ba3bccbdace\"" Mar 7 00:54:13.524833 containerd[1593]: time="2026-03-07T00:54:13.524792147Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 00:54:15.014097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3628110853.mount: Deactivated successfully. 
Mar 7 00:54:15.143839 kubelet[2769]: I0307 00:54:15.143591 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-42zjt" podStartSLOduration=3.143574139 podStartE2EDuration="3.143574139s" podCreationTimestamp="2026-03-07 00:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:13.693145064 +0000 UTC m=+6.202708644" watchObservedRunningTime="2026-03-07 00:54:15.143574139 +0000 UTC m=+7.653137599" Mar 7 00:54:15.479095 containerd[1593]: time="2026-03-07T00:54:15.477956048Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:15.479095 containerd[1593]: time="2026-03-07T00:54:15.478979661Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 00:54:15.479838 containerd[1593]: time="2026-03-07T00:54:15.479806184Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:15.482201 containerd[1593]: time="2026-03-07T00:54:15.482160547Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:15.483113 containerd[1593]: time="2026-03-07T00:54:15.483075794Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 1.957422601s" Mar 7 00:54:15.483113 containerd[1593]: time="2026-03-07T00:54:15.483109436Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 00:54:15.487847 containerd[1593]: time="2026-03-07T00:54:15.487821081Z" level=info msg="CreateContainer within sandbox \"934c8079256d2fd4729aa354f8827edc0936864af1165a846aa21ba3bccbdace\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 00:54:15.499310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3396623535.mount: Deactivated successfully. Mar 7 00:54:15.509722 containerd[1593]: time="2026-03-07T00:54:15.509645696Z" level=info msg="CreateContainer within sandbox \"934c8079256d2fd4729aa354f8827edc0936864af1165a846aa21ba3bccbdace\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2204708b77eb2a4e144e1f21c838544a5fcdaf750ae37d82ae9d0bcfec2361fc\"" Mar 7 00:54:15.510345 containerd[1593]: time="2026-03-07T00:54:15.510285689Z" level=info msg="StartContainer for \"2204708b77eb2a4e144e1f21c838544a5fcdaf750ae37d82ae9d0bcfec2361fc\"" Mar 7 00:54:15.558297 containerd[1593]: time="2026-03-07T00:54:15.558243423Z" level=info msg="StartContainer for \"2204708b77eb2a4e144e1f21c838544a5fcdaf750ae37d82ae9d0bcfec2361fc\" returns successfully" Mar 7 00:54:15.715869 kubelet[2769]: I0307 00:54:15.715789 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-c2g8d" podStartSLOduration=0.754805905 podStartE2EDuration="2.715770456s" podCreationTimestamp="2026-03-07 00:54:13 +0000 UTC" firstStartedPulling="2026-03-07 00:54:13.523048212 +0000 UTC m=+6.032611712" lastFinishedPulling="2026-03-07 00:54:15.484012763 +0000 UTC m=+7.993576263" observedRunningTime="2026-03-07 00:54:15.701872133 +0000 UTC m=+8.211435633" watchObservedRunningTime="2026-03-07 00:54:15.715770456 +0000 UTC m=+8.225333956" Mar 7 00:54:21.777741 sudo[1882]: pam_unix(sudo:session): session closed for user root Mar 7 00:54:21.874518 
sshd[1862]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:21.877711 systemd-logind[1562]: Session 7 logged out. Waiting for processes to exit. Mar 7 00:54:21.880861 systemd[1]: sshd@6-188.245.55.131:22-20.161.92.111:57404.service: Deactivated successfully. Mar 7 00:54:21.886367 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 00:54:21.893662 systemd-logind[1562]: Removed session 7. Mar 7 00:54:27.960985 kubelet[2769]: I0307 00:54:27.960805 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f666c94d-8c4e-4479-8fec-0927a42599c4-tigera-ca-bundle\") pod \"calico-typha-6cdf876dd7-4sznd\" (UID: \"f666c94d-8c4e-4479-8fec-0927a42599c4\") " pod="calico-system/calico-typha-6cdf876dd7-4sznd" Mar 7 00:54:27.960985 kubelet[2769]: I0307 00:54:27.960897 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vjpx\" (UniqueName: \"kubernetes.io/projected/f666c94d-8c4e-4479-8fec-0927a42599c4-kube-api-access-2vjpx\") pod \"calico-typha-6cdf876dd7-4sznd\" (UID: \"f666c94d-8c4e-4479-8fec-0927a42599c4\") " pod="calico-system/calico-typha-6cdf876dd7-4sznd" Mar 7 00:54:27.960985 kubelet[2769]: I0307 00:54:27.960922 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f666c94d-8c4e-4479-8fec-0927a42599c4-typha-certs\") pod \"calico-typha-6cdf876dd7-4sznd\" (UID: \"f666c94d-8c4e-4479-8fec-0927a42599c4\") " pod="calico-system/calico-typha-6cdf876dd7-4sznd" Mar 7 00:54:28.064695 kubelet[2769]: I0307 00:54:28.063415 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-nodeproc\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " 
pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064695 kubelet[2769]: I0307 00:54:28.064328 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-cni-log-dir\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064695 kubelet[2769]: I0307 00:54:28.064381 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-cni-net-dir\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064695 kubelet[2769]: I0307 00:54:28.064404 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c02fca6d-0e28-4e5e-8044-51e30341d190-node-certs\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064695 kubelet[2769]: I0307 00:54:28.064447 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-policysync\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064994 kubelet[2769]: I0307 00:54:28.064470 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-var-run-calico\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064994 kubelet[2769]: I0307 00:54:28.064496 
2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-flexvol-driver-host\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064994 kubelet[2769]: I0307 00:54:28.064573 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-xtables-lock\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064994 kubelet[2769]: I0307 00:54:28.064599 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-var-lib-calico\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.064994 kubelet[2769]: I0307 00:54:28.064638 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlvjk\" (UniqueName: \"kubernetes.io/projected/c02fca6d-0e28-4e5e-8044-51e30341d190-kube-api-access-tlvjk\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.065176 kubelet[2769]: I0307 00:54:28.064660 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-bpffs\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.065176 kubelet[2769]: I0307 00:54:28.064681 2769 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-lib-modules\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.065176 kubelet[2769]: I0307 00:54:28.064734 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c02fca6d-0e28-4e5e-8044-51e30341d190-tigera-ca-bundle\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.065176 kubelet[2769]: I0307 00:54:28.064757 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-cni-bin-dir\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.065176 kubelet[2769]: I0307 00:54:28.064782 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/c02fca6d-0e28-4e5e-8044-51e30341d190-sys-fs\") pod \"calico-node-86xsf\" (UID: \"c02fca6d-0e28-4e5e-8044-51e30341d190\") " pod="calico-system/calico-node-86xsf" Mar 7 00:54:28.143123 kubelet[2769]: E0307 00:54:28.142373 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6sst" podUID="7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b" Mar 7 00:54:28.165668 kubelet[2769]: I0307 00:54:28.165375 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b-registration-dir\") pod \"csi-node-driver-t6sst\" (UID: \"7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b\") " pod="calico-system/csi-node-driver-t6sst" Mar 7 00:54:28.167281 kubelet[2769]: I0307 00:54:28.166302 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b-varrun\") pod \"csi-node-driver-t6sst\" (UID: \"7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b\") " pod="calico-system/csi-node-driver-t6sst" Mar 7 00:54:28.167708 kubelet[2769]: I0307 00:54:28.167052 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b-kubelet-dir\") pod \"csi-node-driver-t6sst\" (UID: \"7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b\") " pod="calico-system/csi-node-driver-t6sst" Mar 7 00:54:28.167928 kubelet[2769]: I0307 00:54:28.167825 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2pdg\" (UniqueName: \"kubernetes.io/projected/7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b-kube-api-access-m2pdg\") pod \"csi-node-driver-t6sst\" (UID: \"7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b\") " pod="calico-system/csi-node-driver-t6sst" Mar 7 00:54:28.168032 kubelet[2769]: I0307 00:54:28.168016 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b-socket-dir\") pod \"csi-node-driver-t6sst\" (UID: \"7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b\") " pod="calico-system/csi-node-driver-t6sst" Mar 7 00:54:28.177507 kubelet[2769]: E0307 00:54:28.177235 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.177507 kubelet[2769]: W0307 
00:54:28.177491 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.177680 kubelet[2769]: E0307 00:54:28.177524 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.191017 kubelet[2769]: E0307 00:54:28.190888 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.191017 kubelet[2769]: W0307 00:54:28.190916 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.191017 kubelet[2769]: E0307 00:54:28.190975 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.225847 containerd[1593]: time="2026-03-07T00:54:28.225695249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cdf876dd7-4sznd,Uid:f666c94d-8c4e-4479-8fec-0927a42599c4,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:28.254815 containerd[1593]: time="2026-03-07T00:54:28.254669071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:28.254815 containerd[1593]: time="2026-03-07T00:54:28.254743914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:28.255910 containerd[1593]: time="2026-03-07T00:54:28.254754835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:28.255910 containerd[1593]: time="2026-03-07T00:54:28.254848879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:28.269419 kubelet[2769]: E0307 00:54:28.269386 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.269419 kubelet[2769]: W0307 00:54:28.269409 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.269593 kubelet[2769]: E0307 00:54:28.269431 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.269808 kubelet[2769]: E0307 00:54:28.269784 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.269808 kubelet[2769]: W0307 00:54:28.269802 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.270045 kubelet[2769]: E0307 00:54:28.269816 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.270595 kubelet[2769]: E0307 00:54:28.270392 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.270595 kubelet[2769]: W0307 00:54:28.270412 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.270595 kubelet[2769]: E0307 00:54:28.270427 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.271694 kubelet[2769]: E0307 00:54:28.271516 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.271694 kubelet[2769]: W0307 00:54:28.271565 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.271694 kubelet[2769]: E0307 00:54:28.271580 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.272812 kubelet[2769]: E0307 00:54:28.272778 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.273349 kubelet[2769]: W0307 00:54:28.273062 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.273349 kubelet[2769]: E0307 00:54:28.273137 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.273860 kubelet[2769]: E0307 00:54:28.273642 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.273860 kubelet[2769]: W0307 00:54:28.273668 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.273860 kubelet[2769]: E0307 00:54:28.273691 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.274231 kubelet[2769]: E0307 00:54:28.274206 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.274348 kubelet[2769]: W0307 00:54:28.274324 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.274620 kubelet[2769]: E0307 00:54:28.274455 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.275680 kubelet[2769]: E0307 00:54:28.275440 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.275680 kubelet[2769]: W0307 00:54:28.275469 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.275680 kubelet[2769]: E0307 00:54:28.275494 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.276697 kubelet[2769]: E0307 00:54:28.276480 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.276697 kubelet[2769]: W0307 00:54:28.276510 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.276697 kubelet[2769]: E0307 00:54:28.276555 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.277440 kubelet[2769]: E0307 00:54:28.277197 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.277440 kubelet[2769]: W0307 00:54:28.277221 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.277440 kubelet[2769]: E0307 00:54:28.277245 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.278243 kubelet[2769]: E0307 00:54:28.277980 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.278243 kubelet[2769]: W0307 00:54:28.278006 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.278243 kubelet[2769]: E0307 00:54:28.278028 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.278966 kubelet[2769]: E0307 00:54:28.278710 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.278966 kubelet[2769]: W0307 00:54:28.278735 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.278966 kubelet[2769]: E0307 00:54:28.278757 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.279744 kubelet[2769]: E0307 00:54:28.279728 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.279917 kubelet[2769]: W0307 00:54:28.279820 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.279917 kubelet[2769]: E0307 00:54:28.279839 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.280209 kubelet[2769]: E0307 00:54:28.280089 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.280209 kubelet[2769]: W0307 00:54:28.280101 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.280209 kubelet[2769]: E0307 00:54:28.280141 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.280569 kubelet[2769]: E0307 00:54:28.280415 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.280569 kubelet[2769]: W0307 00:54:28.280426 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.280569 kubelet[2769]: E0307 00:54:28.280438 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.281739 kubelet[2769]: E0307 00:54:28.280786 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.281739 kubelet[2769]: W0307 00:54:28.280802 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.281739 kubelet[2769]: E0307 00:54:28.280813 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.282241 kubelet[2769]: E0307 00:54:28.282225 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.282332 kubelet[2769]: W0307 00:54:28.282319 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.282441 kubelet[2769]: E0307 00:54:28.282390 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.282805 kubelet[2769]: E0307 00:54:28.282669 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.282805 kubelet[2769]: W0307 00:54:28.282681 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.282805 kubelet[2769]: E0307 00:54:28.282692 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.284178 kubelet[2769]: E0307 00:54:28.283974 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.284178 kubelet[2769]: W0307 00:54:28.283991 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.284178 kubelet[2769]: E0307 00:54:28.284004 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.285649 kubelet[2769]: E0307 00:54:28.284368 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.285649 kubelet[2769]: W0307 00:54:28.285433 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.285649 kubelet[2769]: E0307 00:54:28.285458 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.286587 kubelet[2769]: E0307 00:54:28.286427 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.286587 kubelet[2769]: W0307 00:54:28.286560 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.286587 kubelet[2769]: E0307 00:54:28.286575 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.288969 kubelet[2769]: E0307 00:54:28.287915 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.288969 kubelet[2769]: W0307 00:54:28.288121 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.288969 kubelet[2769]: E0307 00:54:28.288137 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.289142 kubelet[2769]: E0307 00:54:28.289043 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.289142 kubelet[2769]: W0307 00:54:28.289053 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.289142 kubelet[2769]: E0307 00:54:28.289065 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.290463 kubelet[2769]: E0307 00:54:28.290412 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.290463 kubelet[2769]: W0307 00:54:28.290433 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.291836 kubelet[2769]: E0307 00:54:28.291804 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.292595 kubelet[2769]: E0307 00:54:28.292568 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.292595 kubelet[2769]: W0307 00:54:28.292590 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.292696 kubelet[2769]: E0307 00:54:28.292604 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:28.299400 kubelet[2769]: E0307 00:54:28.299290 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:28.299400 kubelet[2769]: W0307 00:54:28.299313 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:28.299400 kubelet[2769]: E0307 00:54:28.299359 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:28.315369 containerd[1593]: time="2026-03-07T00:54:28.315282628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cdf876dd7-4sznd,Uid:f666c94d-8c4e-4479-8fec-0927a42599c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b760952e94bb9a708768ad90bad3f1644de765e7173383c95e7c29ede854d71d\"" Mar 7 00:54:28.318384 containerd[1593]: time="2026-03-07T00:54:28.318035944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 00:54:28.339374 containerd[1593]: time="2026-03-07T00:54:28.339294400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86xsf,Uid:c02fca6d-0e28-4e5e-8044-51e30341d190,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:28.367838 containerd[1593]: time="2026-03-07T00:54:28.367208778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:28.368706 containerd[1593]: time="2026-03-07T00:54:28.368465951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:28.368706 containerd[1593]: time="2026-03-07T00:54:28.368489512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:28.369991 containerd[1593]: time="2026-03-07T00:54:28.368782004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:28.404879 containerd[1593]: time="2026-03-07T00:54:28.404841325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-86xsf,Uid:c02fca6d-0e28-4e5e-8044-51e30341d190,Namespace:calico-system,Attempt:0,} returns sandbox id \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\"" Mar 7 00:54:29.565406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2971659381.mount: Deactivated successfully. Mar 7 00:54:29.628068 kubelet[2769]: E0307 00:54:29.627287 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6sst" podUID="7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b" Mar 7 00:54:30.097208 containerd[1593]: time="2026-03-07T00:54:30.097119388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.098558 containerd[1593]: time="2026-03-07T00:54:30.098387480Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.098558 containerd[1593]: time="2026-03-07T00:54:30.098457723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 7 00:54:30.100882 containerd[1593]: time="2026-03-07T00:54:30.100818501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.101725 containerd[1593]: time="2026-03-07T00:54:30.101606093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.783530948s" Mar 7 00:54:30.101725 containerd[1593]: time="2026-03-07T00:54:30.101637895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 00:54:30.102894 containerd[1593]: time="2026-03-07T00:54:30.102865745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 00:54:30.120892 containerd[1593]: time="2026-03-07T00:54:30.120849687Z" level=info msg="CreateContainer within sandbox \"b760952e94bb9a708768ad90bad3f1644de765e7173383c95e7c29ede854d71d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 00:54:30.135088 containerd[1593]: time="2026-03-07T00:54:30.134964310Z" level=info msg="CreateContainer within sandbox \"b760952e94bb9a708768ad90bad3f1644de765e7173383c95e7c29ede854d71d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"31a9badc5160078d6e0a6f1226dea1a3e07db7f70281f2595b37ebaee361115c\"" Mar 7 00:54:30.136701 containerd[1593]: time="2026-03-07T00:54:30.135694500Z" level=info msg="StartContainer for \"31a9badc5160078d6e0a6f1226dea1a3e07db7f70281f2595b37ebaee361115c\"" Mar 7 00:54:30.204158 containerd[1593]: time="2026-03-07T00:54:30.204037681Z" level=info msg="StartContainer for \"31a9badc5160078d6e0a6f1226dea1a3e07db7f70281f2595b37ebaee361115c\" returns successfully" Mar 7 00:54:30.766427 kubelet[2769]: E0307 00:54:30.766136 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.766427 kubelet[2769]: W0307 00:54:30.766233 2769 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.766427 kubelet[2769]: E0307 00:54:30.766268 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:30.767891 kubelet[2769]: E0307 00:54:30.767279 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.767891 kubelet[2769]: W0307 00:54:30.767315 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.767891 kubelet[2769]: E0307 00:54:30.767391 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:30.768612 kubelet[2769]: E0307 00:54:30.768354 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.768612 kubelet[2769]: W0307 00:54:30.768409 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.768612 kubelet[2769]: E0307 00:54:30.768436 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:30.769484 kubelet[2769]: E0307 00:54:30.769305 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.769811 kubelet[2769]: W0307 00:54:30.769328 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.769811 kubelet[2769]: E0307 00:54:30.769655 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:30.770314 kubelet[2769]: E0307 00:54:30.770153 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.770314 kubelet[2769]: W0307 00:54:30.770174 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.770314 kubelet[2769]: E0307 00:54:30.770216 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:30.771081 kubelet[2769]: E0307 00:54:30.771065 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.771397 kubelet[2769]: W0307 00:54:30.771144 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.771397 kubelet[2769]: E0307 00:54:30.771160 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:30.771594 kubelet[2769]: E0307 00:54:30.771579 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.771747 kubelet[2769]: W0307 00:54:30.771676 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.771747 kubelet[2769]: E0307 00:54:30.771694 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:30.772326 kubelet[2769]: E0307 00:54:30.772096 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.772326 kubelet[2769]: W0307 00:54:30.772109 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.772326 kubelet[2769]: E0307 00:54:30.772120 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:30.772462 kubelet[2769]: E0307 00:54:30.772438 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:30.772462 kubelet[2769]: W0307 00:54:30.772457 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:30.772517 kubelet[2769]: E0307 00:54:30.772470 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:31.348628 containerd[1593]: time="2026-03-07T00:54:31.347732942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:31.348628 containerd[1593]: time="2026-03-07T00:54:31.348587937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 7 00:54:31.349207 containerd[1593]: time="2026-03-07T00:54:31.349168121Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:31.351524 containerd[1593]: time="2026-03-07T00:54:31.351494896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:31.352264 containerd[1593]: time="2026-03-07T00:54:31.352226206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.249325139s" Mar 7 00:54:31.352315 containerd[1593]: time="2026-03-07T00:54:31.352265807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 7 00:54:31.359231 containerd[1593]: time="2026-03-07T00:54:31.359162969Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 00:54:31.374215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331460721.mount: Deactivated successfully. Mar 7 00:54:31.375656 containerd[1593]: time="2026-03-07T00:54:31.375420153Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"65c354eeda8fea9c306164df0583d663e3ec4e1001f37b46f9be195708bc3fbd\"" Mar 7 00:54:31.376398 containerd[1593]: time="2026-03-07T00:54:31.376340671Z" level=info msg="StartContainer for \"65c354eeda8fea9c306164df0583d663e3ec4e1001f37b46f9be195708bc3fbd\"" Mar 7 00:54:31.438429 containerd[1593]: time="2026-03-07T00:54:31.438380006Z" level=info msg="StartContainer for \"65c354eeda8fea9c306164df0583d663e3ec4e1001f37b46f9be195708bc3fbd\" returns successfully" Mar 7 00:54:31.579570 containerd[1593]: time="2026-03-07T00:54:31.579463891Z" level=info msg="shim disconnected" id=65c354eeda8fea9c306164df0583d663e3ec4e1001f37b46f9be195708bc3fbd namespace=k8s.io Mar 7 00:54:31.579570 containerd[1593]: time="2026-03-07T00:54:31.579520733Z" level=warning msg="cleaning up after shim disconnected" id=65c354eeda8fea9c306164df0583d663e3ec4e1001f37b46f9be195708bc3fbd namespace=k8s.io Mar 7 00:54:31.579570 containerd[1593]: time="2026-03-07T00:54:31.579547934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:31.627954 kubelet[2769]: E0307 00:54:31.627433 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6sst" podUID="7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b" Mar 7 00:54:31.746370 kubelet[2769]: I0307 00:54:31.745976 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:31.753736 
containerd[1593]: time="2026-03-07T00:54:31.753674169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 00:54:31.774600 kubelet[2769]: I0307 00:54:31.773685 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cdf876dd7-4sznd" podStartSLOduration=2.988711019 podStartE2EDuration="4.773670026s" podCreationTimestamp="2026-03-07 00:54:27 +0000 UTC" firstStartedPulling="2026-03-07 00:54:28.317721011 +0000 UTC m=+20.827284511" lastFinishedPulling="2026-03-07 00:54:30.102680018 +0000 UTC m=+22.612243518" observedRunningTime="2026-03-07 00:54:30.755352955 +0000 UTC m=+23.264916495" watchObservedRunningTime="2026-03-07 00:54:31.773670026 +0000 UTC m=+24.283233526" Mar 7 00:54:32.112898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65c354eeda8fea9c306164df0583d663e3ec4e1001f37b46f9be195708bc3fbd-rootfs.mount: Deactivated successfully. Mar 7 00:54:33.629046 kubelet[2769]: E0307 00:54:33.628954 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6sst" podUID="7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b" Mar 7 00:54:35.359807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2576381662.mount: Deactivated successfully. 
Mar 7 00:54:35.383286 containerd[1593]: time="2026-03-07T00:54:35.383208566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:35.384623 containerd[1593]: time="2026-03-07T00:54:35.384570020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 7 00:54:35.385596 containerd[1593]: time="2026-03-07T00:54:35.385542498Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:35.388922 containerd[1593]: time="2026-03-07T00:54:35.388554937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:35.390051 containerd[1593]: time="2026-03-07T00:54:35.389787386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 3.636053774s" Mar 7 00:54:35.390051 containerd[1593]: time="2026-03-07T00:54:35.389837108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 7 00:54:35.395122 containerd[1593]: time="2026-03-07T00:54:35.395090995Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 00:54:35.413713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940906950.mount: Deactivated 
successfully. Mar 7 00:54:35.419711 containerd[1593]: time="2026-03-07T00:54:35.419560560Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"ab1f1487d1feb0bbc5ceb347fc82a588f2b1f3e64cd8a6422ec807703fff6a93\"" Mar 7 00:54:35.421739 containerd[1593]: time="2026-03-07T00:54:35.420819890Z" level=info msg="StartContainer for \"ab1f1487d1feb0bbc5ceb347fc82a588f2b1f3e64cd8a6422ec807703fff6a93\"" Mar 7 00:54:35.496752 containerd[1593]: time="2026-03-07T00:54:35.496697204Z" level=info msg="StartContainer for \"ab1f1487d1feb0bbc5ceb347fc82a588f2b1f3e64cd8a6422ec807703fff6a93\" returns successfully" Mar 7 00:54:35.627319 kubelet[2769]: E0307 00:54:35.627180 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6sst" podUID="7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b" Mar 7 00:54:35.753965 containerd[1593]: time="2026-03-07T00:54:35.753770547Z" level=info msg="shim disconnected" id=ab1f1487d1feb0bbc5ceb347fc82a588f2b1f3e64cd8a6422ec807703fff6a93 namespace=k8s.io Mar 7 00:54:35.753965 containerd[1593]: time="2026-03-07T00:54:35.753852829Z" level=warning msg="cleaning up after shim disconnected" id=ab1f1487d1feb0bbc5ceb347fc82a588f2b1f3e64cd8a6422ec807703fff6a93 namespace=k8s.io Mar 7 00:54:35.753965 containerd[1593]: time="2026-03-07T00:54:35.753868670Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:36.359512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab1f1487d1feb0bbc5ceb347fc82a588f2b1f3e64cd8a6422ec807703fff6a93-rootfs.mount: Deactivated successfully. 
Mar 7 00:54:36.767639 containerd[1593]: time="2026-03-07T00:54:36.766804301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 00:54:37.629008 kubelet[2769]: E0307 00:54:37.627836 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6sst" podUID="7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b" Mar 7 00:54:38.986603 containerd[1593]: time="2026-03-07T00:54:38.986520879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.988053 containerd[1593]: time="2026-03-07T00:54:38.988005807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 7 00:54:38.990017 containerd[1593]: time="2026-03-07T00:54:38.988685833Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.996786 containerd[1593]: time="2026-03-07T00:54:38.996743860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.998011 containerd[1593]: time="2026-03-07T00:54:38.997971953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.231116454s" Mar 7 00:54:38.998011 containerd[1593]: time="2026-03-07T00:54:38.998009713Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 7 00:54:39.004556 containerd[1593]: time="2026-03-07T00:54:39.004480980Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 00:54:39.019928 containerd[1593]: time="2026-03-07T00:54:39.019883754Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3afb3a0411575a2a4a86e802dc40b2aa0d202971cddd280b23aa541a1fc8f5de\"" Mar 7 00:54:39.021682 containerd[1593]: time="2026-03-07T00:54:39.021644839Z" level=info msg="StartContainer for \"3afb3a0411575a2a4a86e802dc40b2aa0d202971cddd280b23aa541a1fc8f5de\"" Mar 7 00:54:39.086477 containerd[1593]: time="2026-03-07T00:54:39.086434950Z" level=info msg="StartContainer for \"3afb3a0411575a2a4a86e802dc40b2aa0d202971cddd280b23aa541a1fc8f5de\" returns successfully" Mar 7 00:54:39.627606 kubelet[2769]: E0307 00:54:39.627564 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-t6sst" podUID="7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b" Mar 7 00:54:39.634623 kubelet[2769]: I0307 00:54:39.633965 2769 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 00:54:39.646090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3afb3a0411575a2a4a86e802dc40b2aa0d202971cddd280b23aa541a1fc8f5de-rootfs.mount: Deactivated successfully. 
Mar 7 00:54:39.727043 containerd[1593]: time="2026-03-07T00:54:39.726571818Z" level=info msg="shim disconnected" id=3afb3a0411575a2a4a86e802dc40b2aa0d202971cddd280b23aa541a1fc8f5de namespace=k8s.io Mar 7 00:54:39.727043 containerd[1593]: time="2026-03-07T00:54:39.726628257Z" level=warning msg="cleaning up after shim disconnected" id=3afb3a0411575a2a4a86e802dc40b2aa0d202971cddd280b23aa541a1fc8f5de namespace=k8s.io Mar 7 00:54:39.727043 containerd[1593]: time="2026-03-07T00:54:39.726640057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:39.778685 kubelet[2769]: I0307 00:54:39.777753 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrqc\" (UniqueName: \"kubernetes.io/projected/fdc0176d-e376-478f-8d38-7a94fccdf9e9-kube-api-access-hgrqc\") pod \"coredns-674b8bbfcf-swrxf\" (UID: \"fdc0176d-e376-478f-8d38-7a94fccdf9e9\") " pod="kube-system/coredns-674b8bbfcf-swrxf" Mar 7 00:54:39.778685 kubelet[2769]: I0307 00:54:39.777811 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-ca-bundle\") pod \"whisker-8459c55ddd-4rsh8\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " pod="calico-system/whisker-8459c55ddd-4rsh8" Mar 7 00:54:39.778685 kubelet[2769]: I0307 00:54:39.777832 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9286d8c3-e0b1-413a-999e-690d6a9c7331-tigera-ca-bundle\") pod \"calico-kube-controllers-6d9747f7f-9p8gb\" (UID: \"9286d8c3-e0b1-413a-999e-690d6a9c7331\") " pod="calico-system/calico-kube-controllers-6d9747f7f-9p8gb" Mar 7 00:54:39.778685 kubelet[2769]: I0307 00:54:39.777851 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5pth\" 
(UniqueName: \"kubernetes.io/projected/9286d8c3-e0b1-413a-999e-690d6a9c7331-kube-api-access-h5pth\") pod \"calico-kube-controllers-6d9747f7f-9p8gb\" (UID: \"9286d8c3-e0b1-413a-999e-690d6a9c7331\") " pod="calico-system/calico-kube-controllers-6d9747f7f-9p8gb" Mar 7 00:54:39.778685 kubelet[2769]: I0307 00:54:39.777884 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvhzl\" (UniqueName: \"kubernetes.io/projected/a347d5f2-52eb-4c7f-89b6-af577611cb2a-kube-api-access-wvhzl\") pod \"calico-apiserver-7696fc9784-bpxl5\" (UID: \"a347d5f2-52eb-4c7f-89b6-af577611cb2a\") " pod="calico-system/calico-apiserver-7696fc9784-bpxl5" Mar 7 00:54:39.778957 kubelet[2769]: I0307 00:54:39.777904 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7edcab55-c19a-4b82-b255-a65ce1665cd4-config-volume\") pod \"coredns-674b8bbfcf-v77d9\" (UID: \"7edcab55-c19a-4b82-b255-a65ce1665cd4\") " pod="kube-system/coredns-674b8bbfcf-v77d9" Mar 7 00:54:39.778957 kubelet[2769]: I0307 00:54:39.777921 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-backend-key-pair\") pod \"whisker-8459c55ddd-4rsh8\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " pod="calico-system/whisker-8459c55ddd-4rsh8" Mar 7 00:54:39.778957 kubelet[2769]: I0307 00:54:39.777955 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fmqn\" (UniqueName: \"kubernetes.io/projected/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-kube-api-access-8fmqn\") pod \"whisker-8459c55ddd-4rsh8\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " pod="calico-system/whisker-8459c55ddd-4rsh8" Mar 7 00:54:39.778957 kubelet[2769]: I0307 00:54:39.777988 2769 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8ddv\" (UniqueName: \"kubernetes.io/projected/7edcab55-c19a-4b82-b255-a65ce1665cd4-kube-api-access-t8ddv\") pod \"coredns-674b8bbfcf-v77d9\" (UID: \"7edcab55-c19a-4b82-b255-a65ce1665cd4\") " pod="kube-system/coredns-674b8bbfcf-v77d9" Mar 7 00:54:39.778957 kubelet[2769]: I0307 00:54:39.778006 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7fnl\" (UniqueName: \"kubernetes.io/projected/c378d9fe-bc41-4a89-93f8-75b75a40f945-kube-api-access-z7fnl\") pod \"calico-apiserver-7696fc9784-rjnrl\" (UID: \"c378d9fe-bc41-4a89-93f8-75b75a40f945\") " pod="calico-system/calico-apiserver-7696fc9784-rjnrl" Mar 7 00:54:39.779070 kubelet[2769]: I0307 00:54:39.778040 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a347d5f2-52eb-4c7f-89b6-af577611cb2a-calico-apiserver-certs\") pod \"calico-apiserver-7696fc9784-bpxl5\" (UID: \"a347d5f2-52eb-4c7f-89b6-af577611cb2a\") " pod="calico-system/calico-apiserver-7696fc9784-bpxl5" Mar 7 00:54:39.779070 kubelet[2769]: I0307 00:54:39.778058 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c378d9fe-bc41-4a89-93f8-75b75a40f945-calico-apiserver-certs\") pod \"calico-apiserver-7696fc9784-rjnrl\" (UID: \"c378d9fe-bc41-4a89-93f8-75b75a40f945\") " pod="calico-system/calico-apiserver-7696fc9784-rjnrl" Mar 7 00:54:39.779070 kubelet[2769]: I0307 00:54:39.778074 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdc0176d-e376-478f-8d38-7a94fccdf9e9-config-volume\") pod \"coredns-674b8bbfcf-swrxf\" (UID: \"fdc0176d-e376-478f-8d38-7a94fccdf9e9\") " 
pod="kube-system/coredns-674b8bbfcf-swrxf" Mar 7 00:54:39.779070 kubelet[2769]: I0307 00:54:39.778091 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-nginx-config\") pod \"whisker-8459c55ddd-4rsh8\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " pod="calico-system/whisker-8459c55ddd-4rsh8" Mar 7 00:54:39.879186 kubelet[2769]: I0307 00:54:39.879025 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65d62761-8ce6-4210-8674-c27a3da452ab-config\") pod \"goldmane-5b85766d88-8l2rp\" (UID: \"65d62761-8ce6-4210-8674-c27a3da452ab\") " pod="calico-system/goldmane-5b85766d88-8l2rp" Mar 7 00:54:39.882749 kubelet[2769]: I0307 00:54:39.880403 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65d62761-8ce6-4210-8674-c27a3da452ab-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-8l2rp\" (UID: \"65d62761-8ce6-4210-8674-c27a3da452ab\") " pod="calico-system/goldmane-5b85766d88-8l2rp" Mar 7 00:54:39.883123 kubelet[2769]: I0307 00:54:39.883095 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/65d62761-8ce6-4210-8674-c27a3da452ab-goldmane-key-pair\") pod \"goldmane-5b85766d88-8l2rp\" (UID: \"65d62761-8ce6-4210-8674-c27a3da452ab\") " pod="calico-system/goldmane-5b85766d88-8l2rp" Mar 7 00:54:39.883285 kubelet[2769]: I0307 00:54:39.883263 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8h8n\" (UniqueName: \"kubernetes.io/projected/65d62761-8ce6-4210-8674-c27a3da452ab-kube-api-access-v8h8n\") pod \"goldmane-5b85766d88-8l2rp\" (UID: 
\"65d62761-8ce6-4210-8674-c27a3da452ab\") " pod="calico-system/goldmane-5b85766d88-8l2rp" Mar 7 00:54:40.032627 containerd[1593]: time="2026-03-07T00:54:40.032570700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9747f7f-9p8gb,Uid:9286d8c3-e0b1-413a-999e-690d6a9c7331,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:40.040067 containerd[1593]: time="2026-03-07T00:54:40.040001843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8459c55ddd-4rsh8,Uid:a4abfd16-30ee-4ab8-9dee-6ebda48e7005,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:40.041974 containerd[1593]: time="2026-03-07T00:54:40.041940368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-swrxf,Uid:fdc0176d-e376-478f-8d38-7a94fccdf9e9,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:40.043935 containerd[1593]: time="2026-03-07T00:54:40.043897092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v77d9,Uid:7edcab55-c19a-4b82-b255-a65ce1665cd4,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:40.054059 containerd[1593]: time="2026-03-07T00:54:40.053977226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8l2rp,Uid:65d62761-8ce6-4210-8674-c27a3da452ab,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:40.056552 containerd[1593]: time="2026-03-07T00:54:40.055172924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-rjnrl,Uid:c378d9fe-bc41-4a89-93f8-75b75a40f945,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:40.066320 containerd[1593]: time="2026-03-07T00:54:40.066270480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-bpxl5,Uid:a347d5f2-52eb-4c7f-89b6-af577611cb2a,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:40.296432 containerd[1593]: time="2026-03-07T00:54:40.296388929Z" level=error msg="Failed to destroy network for sandbox 
\"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.304848 containerd[1593]: time="2026-03-07T00:54:40.304801974Z" level=error msg="encountered an error cleaning up failed sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.305135 containerd[1593]: time="2026-03-07T00:54:40.305020930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v77d9,Uid:7edcab55-c19a-4b82-b255-a65ce1665cd4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.305505 kubelet[2769]: E0307 00:54:40.305300 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.305749 containerd[1593]: time="2026-03-07T00:54:40.305647118Z" level=error msg="Failed to destroy network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.305800 kubelet[2769]: E0307 00:54:40.305767 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v77d9" Mar 7 00:54:40.305836 kubelet[2769]: E0307 00:54:40.305799 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-v77d9" Mar 7 00:54:40.305931 kubelet[2769]: E0307 00:54:40.305853 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-v77d9_kube-system(7edcab55-c19a-4b82-b255-a65ce1665cd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-v77d9_kube-system(7edcab55-c19a-4b82-b255-a65ce1665cd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v77d9" podUID="7edcab55-c19a-4b82-b255-a65ce1665cd4" Mar 7 00:54:40.308595 containerd[1593]: time="2026-03-07T00:54:40.308379428Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.309339 containerd[1593]: time="2026-03-07T00:54:40.308681703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-bpxl5,Uid:a347d5f2-52eb-4c7f-89b6-af577611cb2a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.309790 kubelet[2769]: E0307 00:54:40.309643 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.309933 kubelet[2769]: E0307 00:54:40.309807 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7696fc9784-bpxl5" Mar 7 00:54:40.309933 kubelet[2769]: E0307 00:54:40.309826 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7696fc9784-bpxl5" Mar 7 00:54:40.309933 kubelet[2769]: E0307 00:54:40.309890 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7696fc9784-bpxl5_calico-system(a347d5f2-52eb-4c7f-89b6-af577611cb2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7696fc9784-bpxl5_calico-system(a347d5f2-52eb-4c7f-89b6-af577611cb2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7696fc9784-bpxl5" podUID="a347d5f2-52eb-4c7f-89b6-af577611cb2a" Mar 7 00:54:40.332900 containerd[1593]: time="2026-03-07T00:54:40.332847698Z" level=error msg="Failed to destroy network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.336078 containerd[1593]: time="2026-03-07T00:54:40.335755565Z" level=error msg="encountered an error cleaning up failed sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.336201 
containerd[1593]: time="2026-03-07T00:54:40.336010720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-swrxf,Uid:fdc0176d-e376-478f-8d38-7a94fccdf9e9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.336493 kubelet[2769]: E0307 00:54:40.336374 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.336493 kubelet[2769]: E0307 00:54:40.336437 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-swrxf" Mar 7 00:54:40.336493 kubelet[2769]: E0307 00:54:40.336457 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-swrxf" Mar 7 00:54:40.336636 kubelet[2769]: E0307 
00:54:40.336502 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-swrxf_kube-system(fdc0176d-e376-478f-8d38-7a94fccdf9e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-swrxf_kube-system(fdc0176d-e376-478f-8d38-7a94fccdf9e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-swrxf" podUID="fdc0176d-e376-478f-8d38-7a94fccdf9e9" Mar 7 00:54:40.339092 containerd[1593]: time="2026-03-07T00:54:40.338975545Z" level=error msg="Failed to destroy network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.339547 containerd[1593]: time="2026-03-07T00:54:40.339426897Z" level=error msg="encountered an error cleaning up failed sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.339547 containerd[1593]: time="2026-03-07T00:54:40.339475056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9747f7f-9p8gb,Uid:9286d8c3-e0b1-413a-999e-690d6a9c7331,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.339936 containerd[1593]: time="2026-03-07T00:54:40.339839730Z" level=error msg="Failed to destroy network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.340581 kubelet[2769]: E0307 00:54:40.340439 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.340828 kubelet[2769]: E0307 00:54:40.340737 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9747f7f-9p8gb" Mar 7 00:54:40.340993 kubelet[2769]: E0307 00:54:40.340761 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9747f7f-9p8gb" Mar 7 
00:54:40.341360 containerd[1593]: time="2026-03-07T00:54:40.341092307Z" level=error msg="encountered an error cleaning up failed sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.341360 containerd[1593]: time="2026-03-07T00:54:40.341144666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8459c55ddd-4rsh8,Uid:a4abfd16-30ee-4ab8-9dee-6ebda48e7005,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.341480 kubelet[2769]: E0307 00:54:40.340954 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d9747f7f-9p8gb_calico-system(9286d8c3-e0b1-413a-999e-690d6a9c7331)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d9747f7f-9p8gb_calico-system(9286d8c3-e0b1-413a-999e-690d6a9c7331)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9747f7f-9p8gb" podUID="9286d8c3-e0b1-413a-999e-690d6a9c7331" Mar 7 00:54:40.341757 kubelet[2769]: E0307 00:54:40.341288 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.341757 kubelet[2769]: E0307 00:54:40.341654 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8459c55ddd-4rsh8" Mar 7 00:54:40.341757 kubelet[2769]: E0307 00:54:40.341673 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8459c55ddd-4rsh8" Mar 7 00:54:40.341879 kubelet[2769]: E0307 00:54:40.341727 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8459c55ddd-4rsh8_calico-system(a4abfd16-30ee-4ab8-9dee-6ebda48e7005)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8459c55ddd-4rsh8_calico-system(a4abfd16-30ee-4ab8-9dee-6ebda48e7005)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-8459c55ddd-4rsh8" podUID="a4abfd16-30ee-4ab8-9dee-6ebda48e7005" Mar 7 00:54:40.349444 containerd[1593]: time="2026-03-07T00:54:40.349154398Z" level=error msg="Failed to destroy network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.349572 containerd[1593]: time="2026-03-07T00:54:40.349462233Z" level=error msg="encountered an error cleaning up failed sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.349572 containerd[1593]: time="2026-03-07T00:54:40.349506992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-rjnrl,Uid:c378d9fe-bc41-4a89-93f8-75b75a40f945,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.351147 kubelet[2769]: E0307 00:54:40.351101 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.353552 kubelet[2769]: E0307 00:54:40.352777 2769 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7696fc9784-rjnrl" Mar 7 00:54:40.353552 kubelet[2769]: E0307 00:54:40.352809 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7696fc9784-rjnrl" Mar 7 00:54:40.353552 kubelet[2769]: E0307 00:54:40.352858 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7696fc9784-rjnrl_calico-system(c378d9fe-bc41-4a89-93f8-75b75a40f945)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7696fc9784-rjnrl_calico-system(c378d9fe-bc41-4a89-93f8-75b75a40f945)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7696fc9784-rjnrl" podUID="c378d9fe-bc41-4a89-93f8-75b75a40f945" Mar 7 00:54:40.360316 containerd[1593]: time="2026-03-07T00:54:40.360264594Z" level=error msg="Failed to destroy network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.361101 containerd[1593]: time="2026-03-07T00:54:40.360870703Z" level=error msg="encountered an error cleaning up failed sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.361101 containerd[1593]: time="2026-03-07T00:54:40.361022620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8l2rp,Uid:65d62761-8ce6-4210-8674-c27a3da452ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.361820 kubelet[2769]: E0307 00:54:40.361507 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.362154 kubelet[2769]: E0307 00:54:40.361787 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-8l2rp" Mar 7 00:54:40.362154 kubelet[2769]: E0307 00:54:40.361931 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-8l2rp" Mar 7 00:54:40.362154 kubelet[2769]: E0307 00:54:40.361991 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-8l2rp_calico-system(65d62761-8ce6-4210-8674-c27a3da452ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-8l2rp_calico-system(65d62761-8ce6-4210-8674-c27a3da452ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-8l2rp" podUID="65d62761-8ce6-4210-8674-c27a3da452ab" Mar 7 00:54:40.791415 kubelet[2769]: I0307 00:54:40.788601 2769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:54:40.791811 containerd[1593]: time="2026-03-07T00:54:40.789417582Z" level=info msg="StopPodSandbox for \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\"" Mar 7 00:54:40.791811 containerd[1593]: time="2026-03-07T00:54:40.789623738Z" level=info msg="Ensure that sandbox 3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c in task-service has 
been cleanup successfully" Mar 7 00:54:40.798573 kubelet[2769]: I0307 00:54:40.797943 2769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:54:40.800303 containerd[1593]: time="2026-03-07T00:54:40.799391839Z" level=info msg="StopPodSandbox for \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\"" Mar 7 00:54:40.800303 containerd[1593]: time="2026-03-07T00:54:40.799568515Z" level=info msg="Ensure that sandbox d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee in task-service has been cleanup successfully" Mar 7 00:54:40.802673 containerd[1593]: time="2026-03-07T00:54:40.802123348Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 00:54:40.805616 kubelet[2769]: I0307 00:54:40.804961 2769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:54:40.805725 containerd[1593]: time="2026-03-07T00:54:40.805581885Z" level=info msg="StopPodSandbox for \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\"" Mar 7 00:54:40.805759 containerd[1593]: time="2026-03-07T00:54:40.805731802Z" level=info msg="Ensure that sandbox 02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319 in task-service has been cleanup successfully" Mar 7 00:54:40.844755 containerd[1593]: time="2026-03-07T00:54:40.844620207Z" level=error msg="StopPodSandbox for \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\" failed" error="failed to destroy network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 7 00:54:40.845852 kubelet[2769]: E0307 00:54:40.845715 2769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:54:40.845852 kubelet[2769]: E0307 00:54:40.845786 2769 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c"} Mar 7 00:54:40.845852 kubelet[2769]: E0307 00:54:40.845838 2769 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c378d9fe-bc41-4a89-93f8-75b75a40f945\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:40.846052 kubelet[2769]: E0307 00:54:40.845859 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c378d9fe-bc41-4a89-93f8-75b75a40f945\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7696fc9784-rjnrl" podUID="c378d9fe-bc41-4a89-93f8-75b75a40f945" Mar 7 
00:54:40.848601 kubelet[2769]: I0307 00:54:40.848272 2769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:54:40.851926 containerd[1593]: time="2026-03-07T00:54:40.851851554Z" level=info msg="CreateContainer within sandbox \"38f2c350c29ec424c8f8703c798c99a02c328bfc187a14c50e10be79a9e6ded9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6c53ab121bcb736a819ea78b9b116eda7f5a6c77431579626e2b8d0dd3b5aca5\"" Mar 7 00:54:40.852465 containerd[1593]: time="2026-03-07T00:54:40.852023711Z" level=info msg="StopPodSandbox for \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\"" Mar 7 00:54:40.853182 containerd[1593]: time="2026-03-07T00:54:40.853148610Z" level=info msg="Ensure that sandbox 6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a in task-service has been cleanup successfully" Mar 7 00:54:40.859829 containerd[1593]: time="2026-03-07T00:54:40.859778688Z" level=info msg="StartContainer for \"6c53ab121bcb736a819ea78b9b116eda7f5a6c77431579626e2b8d0dd3b5aca5\"" Mar 7 00:54:40.860556 kubelet[2769]: I0307 00:54:40.860396 2769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:54:40.862716 containerd[1593]: time="2026-03-07T00:54:40.862491078Z" level=info msg="StopPodSandbox for \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\"" Mar 7 00:54:40.862716 containerd[1593]: time="2026-03-07T00:54:40.862696594Z" level=info msg="Ensure that sandbox 6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51 in task-service has been cleanup successfully" Mar 7 00:54:40.871719 kubelet[2769]: I0307 00:54:40.871263 2769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:54:40.875501 
containerd[1593]: time="2026-03-07T00:54:40.873099603Z" level=info msg="StopPodSandbox for \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\"" Mar 7 00:54:40.875501 containerd[1593]: time="2026-03-07T00:54:40.873373438Z" level=info msg="Ensure that sandbox 35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696 in task-service has been cleanup successfully" Mar 7 00:54:40.883157 kubelet[2769]: I0307 00:54:40.883124 2769 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:54:40.885541 containerd[1593]: time="2026-03-07T00:54:40.885380737Z" level=info msg="StopPodSandbox for \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\"" Mar 7 00:54:40.886348 containerd[1593]: time="2026-03-07T00:54:40.886132763Z" level=info msg="Ensure that sandbox 4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a in task-service has been cleanup successfully" Mar 7 00:54:40.925498 containerd[1593]: time="2026-03-07T00:54:40.925449360Z" level=error msg="StopPodSandbox for \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\" failed" error="failed to destroy network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.925796 kubelet[2769]: E0307 00:54:40.925764 2769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:54:40.925900 kubelet[2769]: E0307 00:54:40.925881 2769 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319"} Mar 7 00:54:40.925984 kubelet[2769]: E0307 00:54:40.925971 2769 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:40.926103 kubelet[2769]: E0307 00:54:40.926084 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8459c55ddd-4rsh8" podUID="a4abfd16-30ee-4ab8-9dee-6ebda48e7005" Mar 7 00:54:40.950719 containerd[1593]: time="2026-03-07T00:54:40.950676337Z" level=error msg="StopPodSandbox for \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\" failed" error="failed to destroy network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Mar 7 00:54:40.951725 kubelet[2769]: E0307 00:54:40.951676 2769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:54:40.951859 kubelet[2769]: E0307 00:54:40.951829 2769 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee"} Mar 7 00:54:40.951949 kubelet[2769]: E0307 00:54:40.951933 2769 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"65d62761-8ce6-4210-8674-c27a3da452ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:40.952093 kubelet[2769]: E0307 00:54:40.952070 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"65d62761-8ce6-4210-8674-c27a3da452ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-8l2rp" podUID="65d62761-8ce6-4210-8674-c27a3da452ab" Mar 7 00:54:40.967704 containerd[1593]: 
time="2026-03-07T00:54:40.967654104Z" level=error msg="StopPodSandbox for \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\" failed" error="failed to destroy network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.968400 kubelet[2769]: E0307 00:54:40.968346 2769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:54:40.968493 kubelet[2769]: E0307 00:54:40.968409 2769 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a"} Mar 7 00:54:40.968493 kubelet[2769]: E0307 00:54:40.968445 2769 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9286d8c3-e0b1-413a-999e-690d6a9c7331\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:40.968493 kubelet[2769]: E0307 00:54:40.968469 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9286d8c3-e0b1-413a-999e-690d6a9c7331\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9747f7f-9p8gb" podUID="9286d8c3-e0b1-413a-999e-690d6a9c7331" Mar 7 00:54:40.978141 containerd[1593]: time="2026-03-07T00:54:40.978023514Z" level=error msg="StopPodSandbox for \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\" failed" error="failed to destroy network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.978733 kubelet[2769]: E0307 00:54:40.978513 2769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:54:40.978733 kubelet[2769]: E0307 00:54:40.978620 2769 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51"} Mar 7 00:54:40.978733 kubelet[2769]: E0307 00:54:40.978665 2769 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a347d5f2-52eb-4c7f-89b6-af577611cb2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:40.978733 kubelet[2769]: E0307 00:54:40.978694 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a347d5f2-52eb-4c7f-89b6-af577611cb2a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7696fc9784-bpxl5" podUID="a347d5f2-52eb-4c7f-89b6-af577611cb2a" Mar 7 00:54:40.979078 containerd[1593]: time="2026-03-07T00:54:40.979048815Z" level=error msg="StopPodSandbox for \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\" failed" error="failed to destroy network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.979365 kubelet[2769]: E0307 00:54:40.979330 2769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:54:40.979422 kubelet[2769]: E0307 00:54:40.979374 2769 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a"} Mar 7 00:54:40.979422 kubelet[2769]: E0307 00:54:40.979399 2769 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fdc0176d-e376-478f-8d38-7a94fccdf9e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:40.979422 kubelet[2769]: E0307 00:54:40.979417 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fdc0176d-e376-478f-8d38-7a94fccdf9e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-swrxf" podUID="fdc0176d-e376-478f-8d38-7a94fccdf9e9" Mar 7 00:54:40.987373 containerd[1593]: time="2026-03-07T00:54:40.987319863Z" level=error msg="StopPodSandbox for \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\" failed" error="failed to destroy network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:40.987853 kubelet[2769]: E0307 00:54:40.987798 2769 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:54:40.987983 containerd[1593]: time="2026-03-07T00:54:40.987813254Z" level=info msg="StartContainer for \"6c53ab121bcb736a819ea78b9b116eda7f5a6c77431579626e2b8d0dd3b5aca5\" returns successfully" Mar 7 00:54:40.988121 kubelet[2769]: E0307 00:54:40.988074 2769 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696"} Mar 7 00:54:40.988322 kubelet[2769]: E0307 00:54:40.988246 2769 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7edcab55-c19a-4b82-b255-a65ce1665cd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:40.988560 kubelet[2769]: E0307 00:54:40.988274 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7edcab55-c19a-4b82-b255-a65ce1665cd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-v77d9" 
podUID="7edcab55-c19a-4b82-b255-a65ce1665cd4" Mar 7 00:54:41.019690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319-shm.mount: Deactivated successfully. Mar 7 00:54:41.019845 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a-shm.mount: Deactivated successfully. Mar 7 00:54:41.632960 containerd[1593]: time="2026-03-07T00:54:41.632415638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6sst,Uid:7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:41.832909 systemd-networkd[1246]: calic2405cb575b: Link UP Mar 7 00:54:41.833955 systemd-networkd[1246]: calic2405cb575b: Gained carrier Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.674 [ERROR][3920] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.712 [INFO][3920] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0 csi-node-driver- calico-system 7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b 701 0 2026-03-07 00:54:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 csi-node-driver-t6sst eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic2405cb575b [] [] }} ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" 
Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.713 [INFO][3920] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.760 [INFO][3931] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" HandleID="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Workload="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.780 [INFO][3931] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" HandleID="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Workload="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005f6350), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"csi-node-driver-t6sst", "timestamp":"2026-03-07 00:54:41.760842863 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000399340)} Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.780 [INFO][3931] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.781 [INFO][3931] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.781 [INFO][3931] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.784 [INFO][3931] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.790 [INFO][3931] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.796 [INFO][3931] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.799 [INFO][3931] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.801 [INFO][3931] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.801 [INFO][3931] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.803 [INFO][3931] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.810 [INFO][3931] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" 
host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.816 [INFO][3931] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.129/26] block=192.168.51.128/26 handle="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.816 [INFO][3931] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.129/26] handle="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.816 [INFO][3931] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:41.859915 containerd[1593]: 2026-03-07 00:54:41.816 [INFO][3931] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.129/26] IPv6=[] ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" HandleID="k8s-pod-network.817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Workload="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" Mar 7 00:54:41.861368 containerd[1593]: 2026-03-07 00:54:41.820 [INFO][3920] cni-plugin/k8s.go 418: Populated endpoint ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"csi-node-driver-t6sst", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2405cb575b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:41.861368 containerd[1593]: 2026-03-07 00:54:41.820 [INFO][3920] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.129/32] ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" Mar 7 00:54:41.861368 containerd[1593]: 2026-03-07 00:54:41.820 [INFO][3920] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2405cb575b ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" Mar 7 00:54:41.861368 containerd[1593]: 2026-03-07 00:54:41.835 [INFO][3920] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" 
Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" Mar 7 00:54:41.861368 containerd[1593]: 2026-03-07 00:54:41.836 [INFO][3920] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c", Pod:"csi-node-driver-t6sst", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2405cb575b", MAC:"e2:7c:28:32:9d:65", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:41.861368 containerd[1593]: 2026-03-07 00:54:41.856 [INFO][3920] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c" Namespace="calico-system" Pod="csi-node-driver-t6sst" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-csi--node--driver--t6sst-eth0" Mar 7 00:54:41.877378 containerd[1593]: time="2026-03-07T00:54:41.876667141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:41.877378 containerd[1593]: time="2026-03-07T00:54:41.877178572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:41.877378 containerd[1593]: time="2026-03-07T00:54:41.877197652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:41.877378 containerd[1593]: time="2026-03-07T00:54:41.877305370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:41.899907 containerd[1593]: time="2026-03-07T00:54:41.893637934Z" level=info msg="StopPodSandbox for \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\"" Mar 7 00:54:41.936047 kubelet[2769]: I0307 00:54:41.935979 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-86xsf" podStartSLOduration=3.343201097 podStartE2EDuration="13.935951817s" podCreationTimestamp="2026-03-07 00:54:28 +0000 UTC" firstStartedPulling="2026-03-07 00:54:28.406053136 +0000 UTC m=+20.915616636" lastFinishedPulling="2026-03-07 00:54:38.998803856 +0000 UTC m=+31.508367356" observedRunningTime="2026-03-07 00:54:41.931342255 +0000 UTC m=+34.440905755" watchObservedRunningTime="2026-03-07 00:54:41.935951817 +0000 UTC m=+34.445515317" Mar 7 00:54:41.972098 containerd[1593]: time="2026-03-07T00:54:41.972061365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-t6sst,Uid:7f8c058f-55f4-4f8e-87a1-b3f0c8d90a3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c\"" Mar 7 00:54:41.975375 containerd[1593]: time="2026-03-07T00:54:41.975297471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.020 [INFO][3987] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.020 [INFO][3987] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" iface="eth0" netns="/var/run/netns/cni-dcbdf38a-4a85-63fc-e4fe-c7869ef487ac" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.020 [INFO][3987] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" iface="eth0" netns="/var/run/netns/cni-dcbdf38a-4a85-63fc-e4fe-c7869ef487ac" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.020 [INFO][3987] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" iface="eth0" netns="/var/run/netns/cni-dcbdf38a-4a85-63fc-e4fe-c7869ef487ac" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.020 [INFO][3987] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.020 [INFO][3987] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.056 [INFO][4027] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.056 [INFO][4027] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.056 [INFO][4027] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.070 [WARNING][4027] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.070 [INFO][4027] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.072 [INFO][4027] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:42.080047 containerd[1593]: 2026-03-07 00:54:42.074 [INFO][3987] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:54:42.089824 containerd[1593]: time="2026-03-07T00:54:42.089406703Z" level=info msg="TearDown network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\" successfully" Mar 7 00:54:42.089824 containerd[1593]: time="2026-03-07T00:54:42.089481462Z" level=info msg="StopPodSandbox for \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\" returns successfully" Mar 7 00:54:42.091215 systemd[1]: run-netns-cni\x2ddcbdf38a\x2d4a85\x2d63fc\x2de4fe\x2dc7869ef487ac.mount: Deactivated successfully. 
Mar 7 00:54:42.202668 kubelet[2769]: I0307 00:54:42.202494 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-nginx-config\") pod \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " Mar 7 00:54:42.203579 kubelet[2769]: I0307 00:54:42.202853 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-ca-bundle\") pod \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " Mar 7 00:54:42.203579 kubelet[2769]: I0307 00:54:42.202918 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fmqn\" (UniqueName: \"kubernetes.io/projected/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-kube-api-access-8fmqn\") pod \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " Mar 7 00:54:42.203579 kubelet[2769]: I0307 00:54:42.202958 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-backend-key-pair\") pod \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\" (UID: \"a4abfd16-30ee-4ab8-9dee-6ebda48e7005\") " Mar 7 00:54:42.204756 kubelet[2769]: I0307 00:54:42.204712 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a4abfd16-30ee-4ab8-9dee-6ebda48e7005" (UID: "a4abfd16-30ee-4ab8-9dee-6ebda48e7005"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:54:42.205725 kubelet[2769]: I0307 00:54:42.205678 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a4abfd16-30ee-4ab8-9dee-6ebda48e7005" (UID: "a4abfd16-30ee-4ab8-9dee-6ebda48e7005"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:54:42.207911 kubelet[2769]: I0307 00:54:42.207875 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-kube-api-access-8fmqn" (OuterVolumeSpecName: "kube-api-access-8fmqn") pod "a4abfd16-30ee-4ab8-9dee-6ebda48e7005" (UID: "a4abfd16-30ee-4ab8-9dee-6ebda48e7005"). InnerVolumeSpecName "kube-api-access-8fmqn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 00:54:42.208962 kubelet[2769]: I0307 00:54:42.208919 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a4abfd16-30ee-4ab8-9dee-6ebda48e7005" (UID: "a4abfd16-30ee-4ab8-9dee-6ebda48e7005"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 00:54:42.212658 systemd[1]: var-lib-kubelet-pods-a4abfd16\x2d30ee\x2d4ab8\x2d9dee\x2d6ebda48e7005-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fmqn.mount: Deactivated successfully. Mar 7 00:54:42.212815 systemd[1]: var-lib-kubelet-pods-a4abfd16\x2d30ee\x2d4ab8\x2d9dee\x2d6ebda48e7005-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 7 00:54:42.303684 kubelet[2769]: I0307 00:54:42.303610 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-ca-bundle\") on node \"ci-4081-3-6-n-4bed64c074\" DevicePath \"\"" Mar 7 00:54:42.303684 kubelet[2769]: I0307 00:54:42.303671 2769 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8fmqn\" (UniqueName: \"kubernetes.io/projected/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-kube-api-access-8fmqn\") on node \"ci-4081-3-6-n-4bed64c074\" DevicePath \"\"" Mar 7 00:54:42.303684 kubelet[2769]: I0307 00:54:42.303692 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-4bed64c074\" DevicePath \"\"" Mar 7 00:54:42.303940 kubelet[2769]: I0307 00:54:42.303712 2769 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a4abfd16-30ee-4ab8-9dee-6ebda48e7005-nginx-config\") on node \"ci-4081-3-6-n-4bed64c074\" DevicePath \"\"" Mar 7 00:54:43.114202 kubelet[2769]: I0307 00:54:43.114133 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d1a14012-3d13-47a9-bf08-9deb03d183e4-nginx-config\") pod \"whisker-6766b787fc-z7zlc\" (UID: \"d1a14012-3d13-47a9-bf08-9deb03d183e4\") " pod="calico-system/whisker-6766b787fc-z7zlc" Mar 7 00:54:43.114202 kubelet[2769]: I0307 00:54:43.114205 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d1a14012-3d13-47a9-bf08-9deb03d183e4-whisker-backend-key-pair\") pod \"whisker-6766b787fc-z7zlc\" (UID: \"d1a14012-3d13-47a9-bf08-9deb03d183e4\") " pod="calico-system/whisker-6766b787fc-z7zlc" Mar 7 
00:54:43.115103 kubelet[2769]: I0307 00:54:43.114241 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1a14012-3d13-47a9-bf08-9deb03d183e4-whisker-ca-bundle\") pod \"whisker-6766b787fc-z7zlc\" (UID: \"d1a14012-3d13-47a9-bf08-9deb03d183e4\") " pod="calico-system/whisker-6766b787fc-z7zlc" Mar 7 00:54:43.115103 kubelet[2769]: I0307 00:54:43.114269 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4542z\" (UniqueName: \"kubernetes.io/projected/d1a14012-3d13-47a9-bf08-9deb03d183e4-kube-api-access-4542z\") pod \"whisker-6766b787fc-z7zlc\" (UID: \"d1a14012-3d13-47a9-bf08-9deb03d183e4\") " pod="calico-system/whisker-6766b787fc-z7zlc" Mar 7 00:54:43.270167 containerd[1593]: time="2026-03-07T00:54:43.269461623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6766b787fc-z7zlc,Uid:d1a14012-3d13-47a9-bf08-9deb03d183e4,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:43.423012 systemd-networkd[1246]: caliba8cea7a853: Link UP Mar 7 00:54:43.423345 systemd-networkd[1246]: caliba8cea7a853: Gained carrier Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.307 [ERROR][4154] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.326 [INFO][4154] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0 whisker-6766b787fc- calico-system d1a14012-3d13-47a9-bf08-9deb03d183e4 887 0 2026-03-07 00:54:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6766b787fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 whisker-6766b787fc-z7zlc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliba8cea7a853 [] [] }} ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.326 [INFO][4154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.357 [INFO][4166] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" HandleID="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.373 [INFO][4166] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" HandleID="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fba80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"whisker-6766b787fc-z7zlc", "timestamp":"2026-03-07 00:54:43.357164261 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003c71e0)} Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.373 [INFO][4166] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.374 [INFO][4166] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.374 [INFO][4166] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.378 [INFO][4166] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.386 [INFO][4166] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.391 [INFO][4166] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.393 [INFO][4166] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.396 [INFO][4166] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.396 [INFO][4166] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.398 [INFO][4166] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d 
Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.406 [INFO][4166] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.414 [INFO][4166] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.130/26] block=192.168.51.128/26 handle="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.414 [INFO][4166] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.130/26] handle="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.414 [INFO][4166] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:43.447695 containerd[1593]: 2026-03-07 00:54:43.414 [INFO][4166] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.130/26] IPv6=[] ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" HandleID="k8s-pod-network.b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" Mar 7 00:54:43.448472 containerd[1593]: 2026-03-07 00:54:43.418 [INFO][4154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0", GenerateName:"whisker-6766b787fc-", Namespace:"calico-system", 
SelfLink:"", UID:"d1a14012-3d13-47a9-bf08-9deb03d183e4", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6766b787fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"whisker-6766b787fc-z7zlc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba8cea7a853", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:43.448472 containerd[1593]: 2026-03-07 00:54:43.418 [INFO][4154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.130/32] ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" Mar 7 00:54:43.448472 containerd[1593]: 2026-03-07 00:54:43.418 [INFO][4154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba8cea7a853 ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" Mar 7 00:54:43.448472 containerd[1593]: 2026-03-07 00:54:43.425 [INFO][4154] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" Mar 7 00:54:43.448472 containerd[1593]: 2026-03-07 00:54:43.428 [INFO][4154] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0", GenerateName:"whisker-6766b787fc-", Namespace:"calico-system", SelfLink:"", UID:"d1a14012-3d13-47a9-bf08-9deb03d183e4", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6766b787fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d", Pod:"whisker-6766b787fc-z7zlc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba8cea7a853", 
MAC:"d2:e2:fc:af:dc:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:43.448472 containerd[1593]: 2026-03-07 00:54:43.443 [INFO][4154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d" Namespace="calico-system" Pod="whisker-6766b787fc-z7zlc" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--6766b787fc--z7zlc-eth0" Mar 7 00:54:43.486334 containerd[1593]: time="2026-03-07T00:54:43.485975756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:43.486334 containerd[1593]: time="2026-03-07T00:54:43.486028635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:43.486334 containerd[1593]: time="2026-03-07T00:54:43.486039555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:43.486334 containerd[1593]: time="2026-03-07T00:54:43.486158114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:43.493630 systemd-networkd[1246]: calic2405cb575b: Gained IPv6LL Mar 7 00:54:43.582047 containerd[1593]: time="2026-03-07T00:54:43.581996836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:43.583085 containerd[1593]: time="2026-03-07T00:54:43.582961622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 00:54:43.584278 containerd[1593]: time="2026-03-07T00:54:43.584196045Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:43.587076 containerd[1593]: time="2026-03-07T00:54:43.586440133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6766b787fc-z7zlc,Uid:d1a14012-3d13-47a9-bf08-9deb03d183e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d\"" Mar 7 00:54:43.589088 containerd[1593]: time="2026-03-07T00:54:43.589045976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:43.590291 containerd[1593]: time="2026-03-07T00:54:43.589804085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.614462935s" Mar 7 00:54:43.590291 containerd[1593]: time="2026-03-07T00:54:43.589840525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image 
reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 00:54:43.591511 containerd[1593]: time="2026-03-07T00:54:43.591481582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 00:54:43.595285 containerd[1593]: time="2026-03-07T00:54:43.595243608Z" level=info msg="CreateContainer within sandbox \"817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 00:54:43.615733 containerd[1593]: time="2026-03-07T00:54:43.615689359Z" level=info msg="CreateContainer within sandbox \"817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"451f48f2a34a6014faad72f3d701207c674e250836d06fd8f07714dc7134ccb1\"" Mar 7 00:54:43.619554 containerd[1593]: time="2026-03-07T00:54:43.617201697Z" level=info msg="StartContainer for \"451f48f2a34a6014faad72f3d701207c674e250836d06fd8f07714dc7134ccb1\"" Mar 7 00:54:43.630319 kubelet[2769]: I0307 00:54:43.630260 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4abfd16-30ee-4ab8-9dee-6ebda48e7005" path="/var/lib/kubelet/pods/a4abfd16-30ee-4ab8-9dee-6ebda48e7005/volumes" Mar 7 00:54:43.694458 containerd[1593]: time="2026-03-07T00:54:43.694193007Z" level=info msg="StartContainer for \"451f48f2a34a6014faad72f3d701207c674e250836d06fd8f07714dc7134ccb1\" returns successfully" Mar 7 00:54:44.771679 kubelet[2769]: I0307 00:54:44.770744 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:44.900958 systemd-networkd[1246]: caliba8cea7a853: Gained IPv6LL Mar 7 00:54:45.068970 containerd[1593]: time="2026-03-07T00:54:45.068805910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:45.070553 containerd[1593]: time="2026-03-07T00:54:45.070320333Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 00:54:45.071209 containerd[1593]: time="2026-03-07T00:54:45.071167003Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:45.075627 containerd[1593]: time="2026-03-07T00:54:45.075357755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:45.076208 containerd[1593]: time="2026-03-07T00:54:45.076171545Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.484654924s" Mar 7 00:54:45.076282 containerd[1593]: time="2026-03-07T00:54:45.076207305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 00:54:45.079573 containerd[1593]: time="2026-03-07T00:54:45.078897514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 00:54:45.082431 containerd[1593]: time="2026-03-07T00:54:45.082162516Z" level=info msg="CreateContainer within sandbox \"b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 00:54:45.098629 containerd[1593]: time="2026-03-07T00:54:45.098260090Z" level=info msg="CreateContainer within sandbox \"b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns 
container id \"f4869299f9e1621f0d02ed657ffe40c51d2169176101a3d18781890f1862da22\"" Mar 7 00:54:45.099405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199781866.mount: Deactivated successfully. Mar 7 00:54:45.101711 containerd[1593]: time="2026-03-07T00:54:45.099827152Z" level=info msg="StartContainer for \"f4869299f9e1621f0d02ed657ffe40c51d2169176101a3d18781890f1862da22\"" Mar 7 00:54:45.173193 containerd[1593]: time="2026-03-07T00:54:45.173128544Z" level=info msg="StartContainer for \"f4869299f9e1621f0d02ed657ffe40c51d2169176101a3d18781890f1862da22\" returns successfully" Mar 7 00:54:45.977582 kernel: calico-node[4389]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 00:54:46.380862 systemd-networkd[1246]: vxlan.calico: Link UP Mar 7 00:54:46.380870 systemd-networkd[1246]: vxlan.calico: Gained carrier Mar 7 00:54:46.677318 containerd[1593]: time="2026-03-07T00:54:46.677205587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:46.679105 containerd[1593]: time="2026-03-07T00:54:46.679068007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 7 00:54:46.681611 containerd[1593]: time="2026-03-07T00:54:46.680369034Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:46.683946 containerd[1593]: time="2026-03-07T00:54:46.683903078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:46.687852 containerd[1593]: time="2026-03-07T00:54:46.687021165Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.608081093s" Mar 7 00:54:46.688061 containerd[1593]: time="2026-03-07T00:54:46.688016715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 7 00:54:46.691949 containerd[1593]: time="2026-03-07T00:54:46.690945325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 00:54:46.704388 containerd[1593]: time="2026-03-07T00:54:46.704351946Z" level=info msg="CreateContainer within sandbox \"817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 00:54:46.731915 containerd[1593]: time="2026-03-07T00:54:46.731509146Z" level=info msg="CreateContainer within sandbox \"817f301e4c7af50780609df801e4c2b1fe84be0c9793d0d1f20e54611deb9a9c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c6993cc4fd5a885abf261138089df518f5dbaa2b1eca7f9de0f9d541e469b605\"" Mar 7 00:54:46.733912 containerd[1593]: time="2026-03-07T00:54:46.733623644Z" level=info msg="StartContainer for \"c6993cc4fd5a885abf261138089df518f5dbaa2b1eca7f9de0f9d541e469b605\"" Mar 7 00:54:46.810718 containerd[1593]: time="2026-03-07T00:54:46.810661289Z" level=info msg="StartContainer for \"c6993cc4fd5a885abf261138089df518f5dbaa2b1eca7f9de0f9d541e469b605\" returns successfully" Mar 7 00:54:46.936437 kubelet[2769]: I0307 00:54:46.936252 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-t6sst" 
podStartSLOduration=14.22025011 podStartE2EDuration="18.936233592s" podCreationTimestamp="2026-03-07 00:54:28 +0000 UTC" firstStartedPulling="2026-03-07 00:54:41.974028972 +0000 UTC m=+34.483592472" lastFinishedPulling="2026-03-07 00:54:46.690012454 +0000 UTC m=+39.199575954" observedRunningTime="2026-03-07 00:54:46.934603049 +0000 UTC m=+39.444166589" watchObservedRunningTime="2026-03-07 00:54:46.936233592 +0000 UTC m=+39.445797092" Mar 7 00:54:47.831475 kubelet[2769]: I0307 00:54:47.831367 2769 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 00:54:47.831475 kubelet[2769]: I0307 00:54:47.831416 2769 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 00:54:48.230190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091024604.mount: Deactivated successfully. 
Mar 7 00:54:48.248655 containerd[1593]: time="2026-03-07T00:54:48.248602418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:48.250375 containerd[1593]: time="2026-03-07T00:54:48.250323445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 00:54:48.250796 containerd[1593]: time="2026-03-07T00:54:48.250704082Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:48.254470 containerd[1593]: time="2026-03-07T00:54:48.253949176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:48.255051 containerd[1593]: time="2026-03-07T00:54:48.255007847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.562412459s" Mar 7 00:54:48.255154 containerd[1593]: time="2026-03-07T00:54:48.255050607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 00:54:48.261638 containerd[1593]: time="2026-03-07T00:54:48.261594795Z" level=info msg="CreateContainer within sandbox \"b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 00:54:48.285844 
containerd[1593]: time="2026-03-07T00:54:48.285678283Z" level=info msg="CreateContainer within sandbox \"b6ee2131314b9cce744e80f1c449fd6e76c5a2d52444b67d70e4309c5efbfa5d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"841782d389fb0ee124cb9980e6e1ee0fc4bfdb8722bf52450c51cda1ff436c06\"" Mar 7 00:54:48.286754 containerd[1593]: time="2026-03-07T00:54:48.286381317Z" level=info msg="StartContainer for \"841782d389fb0ee124cb9980e6e1ee0fc4bfdb8722bf52450c51cda1ff436c06\"" Mar 7 00:54:48.354067 containerd[1593]: time="2026-03-07T00:54:48.353854740Z" level=info msg="StartContainer for \"841782d389fb0ee124cb9980e6e1ee0fc4bfdb8722bf52450c51cda1ff436c06\" returns successfully" Mar 7 00:54:48.427665 systemd-networkd[1246]: vxlan.calico: Gained IPv6LL Mar 7 00:54:48.944563 kubelet[2769]: I0307 00:54:48.943087 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6766b787fc-z7zlc" podStartSLOduration=2.274793642 podStartE2EDuration="6.94305633s" podCreationTimestamp="2026-03-07 00:54:42 +0000 UTC" firstStartedPulling="2026-03-07 00:54:43.588373266 +0000 UTC m=+36.097936766" lastFinishedPulling="2026-03-07 00:54:48.256635994 +0000 UTC m=+40.766199454" observedRunningTime="2026-03-07 00:54:48.937482734 +0000 UTC m=+41.447046234" watchObservedRunningTime="2026-03-07 00:54:48.94305633 +0000 UTC m=+41.452619830" Mar 7 00:54:51.633554 containerd[1593]: time="2026-03-07T00:54:51.633412928Z" level=info msg="StopPodSandbox for \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\"" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.723 [INFO][4587] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.724 [INFO][4587] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" iface="eth0" netns="/var/run/netns/cni-ece4aa5b-5aa7-f946-6d6b-b0cc6a923624" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.726 [INFO][4587] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" iface="eth0" netns="/var/run/netns/cni-ece4aa5b-5aa7-f946-6d6b-b0cc6a923624" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.726 [INFO][4587] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" iface="eth0" netns="/var/run/netns/cni-ece4aa5b-5aa7-f946-6d6b-b0cc6a923624" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.726 [INFO][4587] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.726 [INFO][4587] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.760 [INFO][4603] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.761 [INFO][4603] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.761 [INFO][4603] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.774 [WARNING][4603] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.774 [INFO][4603] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.777 [INFO][4603] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:51.782784 containerd[1593]: 2026-03-07 00:54:51.781 [INFO][4587] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:54:51.786595 containerd[1593]: time="2026-03-07T00:54:51.785283897Z" level=info msg="TearDown network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\" successfully" Mar 7 00:54:51.786595 containerd[1593]: time="2026-03-07T00:54:51.785322417Z" level=info msg="StopPodSandbox for \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\" returns successfully" Mar 7 00:54:51.789605 systemd[1]: run-netns-cni\x2dece4aa5b\x2d5aa7\x2df946\x2d6d6b\x2db0cc6a923624.mount: Deactivated successfully. 
Mar 7 00:54:51.791767 containerd[1593]: time="2026-03-07T00:54:51.791129989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-rjnrl,Uid:c378d9fe-bc41-4a89-93f8-75b75a40f945,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:51.984024 systemd-networkd[1246]: calid4c20da325a: Link UP Mar 7 00:54:51.985250 systemd-networkd[1246]: calid4c20da325a: Gained carrier Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.888 [INFO][4617] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0 calico-apiserver-7696fc9784- calico-system c378d9fe-bc41-4a89-93f8-75b75a40f945 947 0 2026-03-07 00:54:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7696fc9784 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 calico-apiserver-7696fc9784-rjnrl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid4c20da325a [] [] }} ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.888 [INFO][4617] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.918 [INFO][4630] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" HandleID="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.930 [INFO][4630] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" HandleID="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002732f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"calico-apiserver-7696fc9784-rjnrl", "timestamp":"2026-03-07 00:54:51.918253154 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400037af20)} Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.930 [INFO][4630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.931 [INFO][4630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.931 [INFO][4630] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.934 [INFO][4630] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.942 [INFO][4630] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.948 [INFO][4630] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.951 [INFO][4630] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.954 [INFO][4630] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.954 [INFO][4630] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.957 [INFO][4630] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.964 [INFO][4630] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.971 [INFO][4630] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.51.131/26] block=192.168.51.128/26 handle="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.972 [INFO][4630] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.131/26] handle="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.972 [INFO][4630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:52.002159 containerd[1593]: 2026-03-07 00:54:51.972 [INFO][4630] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.131/26] IPv6=[] ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" HandleID="k8s-pod-network.346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:52.004299 containerd[1593]: 2026-03-07 00:54:51.974 [INFO][4617] cni-plugin/k8s.go 418: Populated endpoint ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"c378d9fe-bc41-4a89-93f8-75b75a40f945", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"calico-apiserver-7696fc9784-rjnrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid4c20da325a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:52.004299 containerd[1593]: 2026-03-07 00:54:51.975 [INFO][4617] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.131/32] ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:52.004299 containerd[1593]: 2026-03-07 00:54:51.975 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4c20da325a ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:52.004299 containerd[1593]: 2026-03-07 00:54:51.986 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" 
WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:52.004299 containerd[1593]: 2026-03-07 00:54:51.987 [INFO][4617] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"c378d9fe-bc41-4a89-93f8-75b75a40f945", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c", Pod:"calico-apiserver-7696fc9784-rjnrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid4c20da325a", MAC:"ae:2b:e3:c0:90:41", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:52.004299 containerd[1593]: 2026-03-07 00:54:51.998 [INFO][4617] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-rjnrl" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:54:52.044461 containerd[1593]: time="2026-03-07T00:54:52.044349007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:52.044461 containerd[1593]: time="2026-03-07T00:54:52.044424607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:52.045594 containerd[1593]: time="2026-03-07T00:54:52.044716886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:52.045964 containerd[1593]: time="2026-03-07T00:54:52.045771962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:52.129567 containerd[1593]: time="2026-03-07T00:54:52.129490695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-rjnrl,Uid:c378d9fe-bc41-4a89-93f8-75b75a40f945,Namespace:calico-system,Attempt:1,} returns sandbox id \"346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c\"" Mar 7 00:54:52.131615 containerd[1593]: time="2026-03-07T00:54:52.131560728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:54:53.475811 systemd-networkd[1246]: calid4c20da325a: Gained IPv6LL Mar 7 00:54:53.634839 containerd[1593]: time="2026-03-07T00:54:53.632894258Z" level=info msg="StopPodSandbox for \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\"" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.726 [INFO][4717] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.726 [INFO][4717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" iface="eth0" netns="/var/run/netns/cni-80679e90-0400-dccd-3632-d56b0c503073" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.727 [INFO][4717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" iface="eth0" netns="/var/run/netns/cni-80679e90-0400-dccd-3632-d56b0c503073" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.727 [INFO][4717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" iface="eth0" netns="/var/run/netns/cni-80679e90-0400-dccd-3632-d56b0c503073" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.727 [INFO][4717] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.727 [INFO][4717] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.768 [INFO][4729] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.769 [INFO][4729] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.769 [INFO][4729] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.786 [WARNING][4729] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.786 [INFO][4729] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.788 [INFO][4729] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:53.795932 containerd[1593]: 2026-03-07 00:54:53.791 [INFO][4717] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:54:53.797936 containerd[1593]: time="2026-03-07T00:54:53.796686502Z" level=info msg="TearDown network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\" successfully" Mar 7 00:54:53.797936 containerd[1593]: time="2026-03-07T00:54:53.796730861Z" level=info msg="StopPodSandbox for \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\" returns successfully" Mar 7 00:54:53.800420 systemd[1]: run-netns-cni\x2d80679e90\x2d0400\x2ddccd\x2d3632\x2dd56b0c503073.mount: Deactivated successfully. 
Mar 7 00:54:53.808371 containerd[1593]: time="2026-03-07T00:54:53.808020791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8l2rp,Uid:65d62761-8ce6-4210-8674-c27a3da452ab,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:53.994299 systemd-networkd[1246]: cali3bfecd80b59: Link UP Mar 7 00:54:53.995346 systemd-networkd[1246]: cali3bfecd80b59: Gained carrier Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.876 [INFO][4739] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0 goldmane-5b85766d88- calico-system 65d62761-8ce6-4210-8674-c27a3da452ab 956 0 2026-03-07 00:54:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 goldmane-5b85766d88-8l2rp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3bfecd80b59 [] [] }} ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.876 [INFO][4739] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.916 [INFO][4748] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" 
HandleID="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.930 [INFO][4748] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" HandleID="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"goldmane-5b85766d88-8l2rp", "timestamp":"2026-03-07 00:54:53.916046423 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d9ce0)} Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.930 [INFO][4748] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.930 [INFO][4748] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.930 [INFO][4748] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.935 [INFO][4748] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.942 [INFO][4748] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.957 [INFO][4748] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.961 [INFO][4748] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.965 [INFO][4748] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.965 [INFO][4748] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.967 [INFO][4748] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532 Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.973 [INFO][4748] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.983 [INFO][4748] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.51.132/26] block=192.168.51.128/26 handle="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.983 [INFO][4748] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.132/26] handle="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.983 [INFO][4748] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:54.022220 containerd[1593]: 2026-03-07 00:54:53.984 [INFO][4748] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.132/26] IPv6=[] ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" HandleID="k8s-pod-network.717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:54.022805 containerd[1593]: 2026-03-07 00:54:53.989 [INFO][4739] cni-plugin/k8s.go 418: Populated endpoint ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"65d62761-8ce6-4210-8674-c27a3da452ab", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"goldmane-5b85766d88-8l2rp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3bfecd80b59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:54.022805 containerd[1593]: 2026-03-07 00:54:53.989 [INFO][4739] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.132/32] ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:54.022805 containerd[1593]: 2026-03-07 00:54:53.989 [INFO][4739] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bfecd80b59 ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:54.022805 containerd[1593]: 2026-03-07 00:54:53.996 [INFO][4739] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:54.022805 containerd[1593]: 2026-03-07 00:54:53.997 [INFO][4739] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"65d62761-8ce6-4210-8674-c27a3da452ab", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532", Pod:"goldmane-5b85766d88-8l2rp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3bfecd80b59", MAC:"7a:21:a5:56:ef:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:54.022805 containerd[1593]: 2026-03-07 00:54:54.015 [INFO][4739] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532" Namespace="calico-system" Pod="goldmane-5b85766d88-8l2rp" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:54:54.067810 containerd[1593]: time="2026-03-07T00:54:54.067039003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:54.067810 containerd[1593]: time="2026-03-07T00:54:54.067108603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:54.067810 containerd[1593]: time="2026-03-07T00:54:54.067129843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:54.067810 containerd[1593]: time="2026-03-07T00:54:54.067229923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:54.171783 containerd[1593]: time="2026-03-07T00:54:54.171744625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-8l2rp,Uid:65d62761-8ce6-4210-8674-c27a3da452ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532\"" Mar 7 00:54:54.632462 containerd[1593]: time="2026-03-07T00:54:54.632036320Z" level=info msg="StopPodSandbox for \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\"" Mar 7 00:54:54.668275 containerd[1593]: time="2026-03-07T00:54:54.668219258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:54.670791 containerd[1593]: time="2026-03-07T00:54:54.670752534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 
00:54:54.671857 containerd[1593]: time="2026-03-07T00:54:54.671798092Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:54.679783 containerd[1593]: time="2026-03-07T00:54:54.679703918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:54.681512 containerd[1593]: time="2026-03-07T00:54:54.681480995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.549873987s" Mar 7 00:54:54.681929 containerd[1593]: time="2026-03-07T00:54:54.681877595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:54:54.684677 containerd[1593]: time="2026-03-07T00:54:54.684632070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 00:54:54.691629 containerd[1593]: time="2026-03-07T00:54:54.690802539Z" level=info msg="CreateContainer within sandbox \"346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:54:54.711084 containerd[1593]: time="2026-03-07T00:54:54.710983945Z" level=info msg="CreateContainer within sandbox \"346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"acf6bf0654745cb9fc43523aa79d8f8c3617e9f91735c66285d98cbdba98e9a0\"" Mar 7 
00:54:54.715743 containerd[1593]: time="2026-03-07T00:54:54.715609177Z" level=info msg="StartContainer for \"acf6bf0654745cb9fc43523aa79d8f8c3617e9f91735c66285d98cbdba98e9a0\"" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.703 [INFO][4826] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.704 [INFO][4826] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" iface="eth0" netns="/var/run/netns/cni-ec9a239e-beb5-ec1e-6f7e-2b1e714ebbaf" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.705 [INFO][4826] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" iface="eth0" netns="/var/run/netns/cni-ec9a239e-beb5-ec1e-6f7e-2b1e714ebbaf" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.706 [INFO][4826] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" iface="eth0" netns="/var/run/netns/cni-ec9a239e-beb5-ec1e-6f7e-2b1e714ebbaf" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.706 [INFO][4826] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.706 [INFO][4826] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.744 [INFO][4837] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.744 [INFO][4837] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.744 [INFO][4837] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.761 [WARNING][4837] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.763 [INFO][4837] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.767 [INFO][4837] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:54.773369 containerd[1593]: 2026-03-07 00:54:54.771 [INFO][4826] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:54:54.773848 containerd[1593]: time="2026-03-07T00:54:54.773627238Z" level=info msg="TearDown network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\" successfully" Mar 7 00:54:54.773848 containerd[1593]: time="2026-03-07T00:54:54.773664438Z" level=info msg="StopPodSandbox for \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\" returns successfully" Mar 7 00:54:54.779906 containerd[1593]: time="2026-03-07T00:54:54.779487068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-swrxf,Uid:fdc0176d-e376-478f-8d38-7a94fccdf9e9,Namespace:kube-system,Attempt:1,}" Mar 7 00:54:54.808300 systemd[1]: run-netns-cni\x2dec9a239e\x2dbeb5\x2dec1e\x2d6f7e\x2d2b1e714ebbaf.mount: Deactivated successfully. 
Mar 7 00:54:54.816683 containerd[1593]: time="2026-03-07T00:54:54.816598685Z" level=info msg="StartContainer for \"acf6bf0654745cb9fc43523aa79d8f8c3617e9f91735c66285d98cbdba98e9a0\" returns successfully" Mar 7 00:54:54.958300 systemd-networkd[1246]: califf8ce7d8e98: Link UP Mar 7 00:54:54.960728 systemd-networkd[1246]: califf8ce7d8e98: Gained carrier Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.848 [INFO][4875] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0 coredns-674b8bbfcf- kube-system fdc0176d-e376-478f-8d38-7a94fccdf9e9 964 0 2026-03-07 00:54:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 coredns-674b8bbfcf-swrxf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf8ce7d8e98 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.849 [INFO][4875] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.889 [INFO][4890] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" HandleID="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" 
Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.900 [INFO][4890] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" HandleID="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"coredns-674b8bbfcf-swrxf", "timestamp":"2026-03-07 00:54:54.889437641 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002a6420)} Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.900 [INFO][4890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.900 [INFO][4890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.900 [INFO][4890] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.906 [INFO][4890] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.913 [INFO][4890] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.919 [INFO][4890] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.921 [INFO][4890] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.925 [INFO][4890] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.925 [INFO][4890] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.930 [INFO][4890] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4 Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.936 [INFO][4890] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.947 [INFO][4890] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.51.133/26] block=192.168.51.128/26 handle="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.947 [INFO][4890] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.133/26] handle="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.947 [INFO][4890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:55.005819 containerd[1593]: 2026-03-07 00:54:54.947 [INFO][4890] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.133/26] IPv6=[] ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" HandleID="k8s-pod-network.d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:55.006403 containerd[1593]: 2026-03-07 00:54:54.951 [INFO][4875] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fdc0176d-e376-478f-8d38-7a94fccdf9e9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"coredns-674b8bbfcf-swrxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf8ce7d8e98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:55.006403 containerd[1593]: 2026-03-07 00:54:54.952 [INFO][4875] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.133/32] ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:55.006403 containerd[1593]: 2026-03-07 00:54:54.952 [INFO][4875] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf8ce7d8e98 ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:55.006403 containerd[1593]: 2026-03-07 00:54:54.959 [INFO][4875] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:55.006403 containerd[1593]: 2026-03-07 00:54:54.976 [INFO][4875] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fdc0176d-e376-478f-8d38-7a94fccdf9e9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4", Pod:"coredns-674b8bbfcf-swrxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf8ce7d8e98", 
MAC:"36:aa:85:ce:96:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:55.006403 containerd[1593]: 2026-03-07 00:54:54.996 [INFO][4875] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4" Namespace="kube-system" Pod="coredns-674b8bbfcf-swrxf" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:54:55.010588 kubelet[2769]: I0307 00:54:55.010484 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7696fc9784-rjnrl" podStartSLOduration=26.457474302 podStartE2EDuration="29.010411882s" podCreationTimestamp="2026-03-07 00:54:26 +0000 UTC" firstStartedPulling="2026-03-07 00:54:52.130796051 +0000 UTC m=+44.640359551" lastFinishedPulling="2026-03-07 00:54:54.683733631 +0000 UTC m=+47.193297131" observedRunningTime="2026-03-07 00:54:55.004326727 +0000 UTC m=+47.513890227" watchObservedRunningTime="2026-03-07 00:54:55.010411882 +0000 UTC m=+47.519975382" Mar 7 00:54:55.032572 containerd[1593]: time="2026-03-07T00:54:55.032458985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:55.032572 containerd[1593]: time="2026-03-07T00:54:55.032524745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:55.032779 containerd[1593]: time="2026-03-07T00:54:55.032551825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:55.032779 containerd[1593]: time="2026-03-07T00:54:55.032639945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:55.116204 containerd[1593]: time="2026-03-07T00:54:55.116085641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-swrxf,Uid:fdc0176d-e376-478f-8d38-7a94fccdf9e9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4\"" Mar 7 00:54:55.124729 containerd[1593]: time="2026-03-07T00:54:55.124595594Z" level=info msg="CreateContainer within sandbox \"d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:55.153168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount514624371.mount: Deactivated successfully. 
Mar 7 00:54:55.158900 containerd[1593]: time="2026-03-07T00:54:55.158794008Z" level=info msg="CreateContainer within sandbox \"d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eff98ef7ff127fc87c152fa7ed1715ffdbea0d19aa828694722ce6203c50d1ce\"" Mar 7 00:54:55.160717 containerd[1593]: time="2026-03-07T00:54:55.160166926Z" level=info msg="StartContainer for \"eff98ef7ff127fc87c152fa7ed1715ffdbea0d19aa828694722ce6203c50d1ce\"" Mar 7 00:54:55.236260 containerd[1593]: time="2026-03-07T00:54:55.236128148Z" level=info msg="StartContainer for \"eff98ef7ff127fc87c152fa7ed1715ffdbea0d19aa828694722ce6203c50d1ce\" returns successfully" Mar 7 00:54:55.628973 containerd[1593]: time="2026-03-07T00:54:55.628653884Z" level=info msg="StopPodSandbox for \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\"" Mar 7 00:54:55.715767 systemd-networkd[1246]: cali3bfecd80b59: Gained IPv6LL Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.722 [INFO][5011] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.729 [INFO][5011] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" iface="eth0" netns="/var/run/netns/cni-fbb437e6-924c-6ee6-48ed-ac9ba9b2a9d9" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.729 [INFO][5011] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" iface="eth0" netns="/var/run/netns/cni-fbb437e6-924c-6ee6-48ed-ac9ba9b2a9d9" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.729 [INFO][5011] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" iface="eth0" netns="/var/run/netns/cni-fbb437e6-924c-6ee6-48ed-ac9ba9b2a9d9" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.729 [INFO][5011] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.729 [INFO][5011] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.774 [INFO][5019] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.774 [INFO][5019] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.774 [INFO][5019] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.786 [WARNING][5019] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.786 [INFO][5019] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.789 [INFO][5019] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:55.795937 containerd[1593]: 2026-03-07 00:54:55.793 [INFO][5011] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:54:55.799089 containerd[1593]: time="2026-03-07T00:54:55.798959872Z" level=info msg="TearDown network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\" successfully" Mar 7 00:54:55.799089 containerd[1593]: time="2026-03-07T00:54:55.798994112Z" level=info msg="StopPodSandbox for \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\" returns successfully" Mar 7 00:54:55.800005 containerd[1593]: time="2026-03-07T00:54:55.799823991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v77d9,Uid:7edcab55-c19a-4b82-b255-a65ce1665cd4,Namespace:kube-system,Attempt:1,}" Mar 7 00:54:55.806328 systemd[1]: run-netns-cni\x2dfbb437e6\x2d924c\x2d6ee6\x2d48ed\x2dac9ba9b2a9d9.mount: Deactivated successfully. 
Mar 7 00:54:55.995469 kubelet[2769]: I0307 00:54:55.994492 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:56.046777 kubelet[2769]: I0307 00:54:56.046694 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-swrxf" podStartSLOduration=43.046676801 podStartE2EDuration="43.046676801s" podCreationTimestamp="2026-03-07 00:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:56.017386437 +0000 UTC m=+48.526949977" watchObservedRunningTime="2026-03-07 00:54:56.046676801 +0000 UTC m=+48.556240301" Mar 7 00:54:56.064957 systemd-networkd[1246]: calif3c79a804bd: Link UP Mar 7 00:54:56.072798 systemd-networkd[1246]: calif3c79a804bd: Gained carrier Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.889 [INFO][5029] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0 coredns-674b8bbfcf- kube-system 7edcab55-c19a-4b82-b255-a65ce1665cd4 980 0 2026-03-07 00:54:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 coredns-674b8bbfcf-v77d9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif3c79a804bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.889 [INFO][5029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.932 [INFO][5042] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" HandleID="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.949 [INFO][5042] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" HandleID="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe80), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"coredns-674b8bbfcf-v77d9", "timestamp":"2026-03-07 00:54:55.93082085 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001846e0)} Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.952 [INFO][5042] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.953 [INFO][5042] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.953 [INFO][5042] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.960 [INFO][5042] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.970 [INFO][5042] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.979 [INFO][5042] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.982 [INFO][5042] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.985 [INFO][5042] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.985 [INFO][5042] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.988 [INFO][5042] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87 Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:55.999 [INFO][5042] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:56.016 [INFO][5042] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.51.134/26] block=192.168.51.128/26 handle="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:56.017 [INFO][5042] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.134/26] handle="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:56.017 [INFO][5042] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:56.106086 containerd[1593]: 2026-03-07 00:54:56.017 [INFO][5042] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.134/26] IPv6=[] ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" HandleID="k8s-pod-network.8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:56.108542 containerd[1593]: 2026-03-07 00:54:56.025 [INFO][5029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7edcab55-c19a-4b82-b255-a65ce1665cd4", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"coredns-674b8bbfcf-v77d9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3c79a804bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:56.108542 containerd[1593]: 2026-03-07 00:54:56.027 [INFO][5029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.134/32] ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:56.108542 containerd[1593]: 2026-03-07 00:54:56.027 [INFO][5029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3c79a804bd ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:56.108542 containerd[1593]: 2026-03-07 00:54:56.080 [INFO][5029] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:56.108542 containerd[1593]: 2026-03-07 00:54:56.081 [INFO][5029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7edcab55-c19a-4b82-b255-a65ce1665cd4", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87", Pod:"coredns-674b8bbfcf-v77d9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3c79a804bd", 
MAC:"d2:a3:f7:c2:05:05", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:56.108542 containerd[1593]: 2026-03-07 00:54:56.098 [INFO][5029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87" Namespace="kube-system" Pod="coredns-674b8bbfcf-v77d9" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:54:56.200561 containerd[1593]: time="2026-03-07T00:54:56.200066740Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:56.200561 containerd[1593]: time="2026-03-07T00:54:56.200134060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:56.200561 containerd[1593]: time="2026-03-07T00:54:56.200150620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:56.206121 containerd[1593]: time="2026-03-07T00:54:56.205902341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:56.340340 containerd[1593]: time="2026-03-07T00:54:56.340278198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v77d9,Uid:7edcab55-c19a-4b82-b255-a65ce1665cd4,Namespace:kube-system,Attempt:1,} returns sandbox id \"8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87\"" Mar 7 00:54:56.352839 containerd[1593]: time="2026-03-07T00:54:56.352692800Z" level=info msg="CreateContainer within sandbox \"8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:56.385940 containerd[1593]: time="2026-03-07T00:54:56.385742444Z" level=info msg="CreateContainer within sandbox \"8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0001f8cadbc15fb706078dfab4891cb03d55853d8902edef0a96005fbaaea935\"" Mar 7 00:54:56.388482 containerd[1593]: time="2026-03-07T00:54:56.388444044Z" level=info msg="StartContainer for \"0001f8cadbc15fb706078dfab4891cb03d55853d8902edef0a96005fbaaea935\"" Mar 7 00:54:56.525243 containerd[1593]: time="2026-03-07T00:54:56.525192582Z" level=info msg="StartContainer for \"0001f8cadbc15fb706078dfab4891cb03d55853d8902edef0a96005fbaaea935\" returns successfully" Mar 7 00:54:56.547702 systemd-networkd[1246]: califf8ce7d8e98: Gained IPv6LL Mar 7 00:54:56.629064 containerd[1593]: time="2026-03-07T00:54:56.628321315Z" level=info msg="StopPodSandbox for \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\"" Mar 7 00:54:56.629064 containerd[1593]: time="2026-03-07T00:54:56.628476515Z" level=info msg="StopPodSandbox for \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\"" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.739 [INFO][5174] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" 
Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.741 [INFO][5174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" iface="eth0" netns="/var/run/netns/cni-701ec25e-a1d1-a60f-5c5b-70a5c6ffe9f7" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.741 [INFO][5174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" iface="eth0" netns="/var/run/netns/cni-701ec25e-a1d1-a60f-5c5b-70a5c6ffe9f7" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.741 [INFO][5174] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" iface="eth0" netns="/var/run/netns/cni-701ec25e-a1d1-a60f-5c5b-70a5c6ffe9f7" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.741 [INFO][5174] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.742 [INFO][5174] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.816 [INFO][5191] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.816 [INFO][5191] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.816 [INFO][5191] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.834 [WARNING][5191] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.834 [INFO][5191] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.839 [INFO][5191] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:56.855070 containerd[1593]: 2026-03-07 00:54:56.851 [INFO][5174] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:54:56.859919 systemd[1]: run-netns-cni\x2d701ec25e\x2da1d1\x2da60f\x2d5c5b\x2d70a5c6ffe9f7.mount: Deactivated successfully. 
Mar 7 00:54:56.871471 containerd[1593]: time="2026-03-07T00:54:56.870712146Z" level=info msg="TearDown network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\" successfully" Mar 7 00:54:56.871471 containerd[1593]: time="2026-03-07T00:54:56.870745546Z" level=info msg="StopPodSandbox for \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\" returns successfully" Mar 7 00:54:56.873645 containerd[1593]: time="2026-03-07T00:54:56.873494387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9747f7f-9p8gb,Uid:9286d8c3-e0b1-413a-999e-690d6a9c7331,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.819 [INFO][5178] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.824 [INFO][5178] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" iface="eth0" netns="/var/run/netns/cni-961872d3-b7e5-2ff9-f0db-591213097b29" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.824 [INFO][5178] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" iface="eth0" netns="/var/run/netns/cni-961872d3-b7e5-2ff9-f0db-591213097b29" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.824 [INFO][5178] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" iface="eth0" netns="/var/run/netns/cni-961872d3-b7e5-2ff9-f0db-591213097b29" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.824 [INFO][5178] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.824 [INFO][5178] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.915 [INFO][5205] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.915 [INFO][5205] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.916 [INFO][5205] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.936 [WARNING][5205] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.936 [INFO][5205] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.938 [INFO][5205] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:56.954461 containerd[1593]: 2026-03-07 00:54:56.945 [INFO][5178] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:54:56.954461 containerd[1593]: time="2026-03-07T00:54:56.951955997Z" level=info msg="TearDown network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\" successfully" Mar 7 00:54:56.954461 containerd[1593]: time="2026-03-07T00:54:56.951983837Z" level=info msg="StopPodSandbox for \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\" returns successfully" Mar 7 00:54:56.954461 containerd[1593]: time="2026-03-07T00:54:56.954333157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-bpxl5,Uid:a347d5f2-52eb-4c7f-89b6-af577611cb2a,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:56.959667 systemd[1]: run-netns-cni\x2d961872d3\x2db7e5\x2d2ff9\x2df0db\x2d591213097b29.mount: Deactivated successfully. 
Mar 7 00:54:57.093442 kubelet[2769]: I0307 00:54:57.093087 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v77d9" podStartSLOduration=44.093069295 podStartE2EDuration="44.093069295s" podCreationTimestamp="2026-03-07 00:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:57.066468228 +0000 UTC m=+49.576031728" watchObservedRunningTime="2026-03-07 00:54:57.093069295 +0000 UTC m=+49.602632795" Mar 7 00:54:57.188722 systemd-networkd[1246]: calif3c79a804bd: Gained IPv6LL Mar 7 00:54:57.219819 systemd-networkd[1246]: cali6aea22dd992: Link UP Mar 7 00:54:57.221779 systemd-networkd[1246]: cali6aea22dd992: Gained carrier Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.000 [INFO][5215] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0 calico-kube-controllers-6d9747f7f- calico-system 9286d8c3-e0b1-413a-999e-690d6a9c7331 999 0 2026-03-07 00:54:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d9747f7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 calico-kube-controllers-6d9747f7f-9p8gb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6aea22dd992 [] [] }} ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.000 [INFO][5215] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.082 [INFO][5234] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" HandleID="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.125 [INFO][5234] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" HandleID="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380180), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"calico-kube-controllers-6d9747f7f-9p8gb", "timestamp":"2026-03-07 00:54:57.082502204 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184dc0)} Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.125 [INFO][5234] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.126 [INFO][5234] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.126 [INFO][5234] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.131 [INFO][5234] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.142 [INFO][5234] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.153 [INFO][5234] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.158 [INFO][5234] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.164 [INFO][5234] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.164 [INFO][5234] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.169 [INFO][5234] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09 Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.178 [INFO][5234] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.192 [INFO][5234] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.51.135/26] block=192.168.51.128/26 handle="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.192 [INFO][5234] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.135/26] handle="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.192 [INFO][5234] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:57.257591 containerd[1593]: 2026-03-07 00:54:57.192 [INFO][5234] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.135/26] IPv6=[] ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" HandleID="k8s-pod-network.d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:57.259945 containerd[1593]: 2026-03-07 00:54:57.204 [INFO][5215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0", GenerateName:"calico-kube-controllers-6d9747f7f-", Namespace:"calico-system", SelfLink:"", UID:"9286d8c3-e0b1-413a-999e-690d6a9c7331", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9747f7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"calico-kube-controllers-6d9747f7f-9p8gb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6aea22dd992", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:57.259945 containerd[1593]: 2026-03-07 00:54:57.204 [INFO][5215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.135/32] ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:57.259945 containerd[1593]: 2026-03-07 00:54:57.205 [INFO][5215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6aea22dd992 ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:57.259945 containerd[1593]: 2026-03-07 00:54:57.223 [INFO][5215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:57.259945 containerd[1593]: 2026-03-07 00:54:57.232 [INFO][5215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0", GenerateName:"calico-kube-controllers-6d9747f7f-", Namespace:"calico-system", SelfLink:"", UID:"9286d8c3-e0b1-413a-999e-690d6a9c7331", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9747f7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09", Pod:"calico-kube-controllers-6d9747f7f-9p8gb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6aea22dd992", MAC:"ca:9d:e7:98:6e:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:57.259945 containerd[1593]: 2026-03-07 00:54:57.245 [INFO][5215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09" Namespace="calico-system" Pod="calico-kube-controllers-6d9747f7f-9p8gb" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:54:57.360662 systemd-networkd[1246]: caliddbe4888423: Link UP Mar 7 00:54:57.362476 systemd-networkd[1246]: caliddbe4888423: Gained carrier Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.114 [INFO][5232] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0 calico-apiserver-7696fc9784- calico-system a347d5f2-52eb-4c7f-89b6-af577611cb2a 1000 0 2026-03-07 00:54:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7696fc9784 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-4bed64c074 calico-apiserver-7696fc9784-bpxl5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliddbe4888423 [] [] }} ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.114 [INFO][5232] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.205 [INFO][5248] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" HandleID="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.236 [INFO][5248] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" HandleID="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000123e90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-4bed64c074", "pod":"calico-apiserver-7696fc9784-bpxl5", "timestamp":"2026-03-07 00:54:57.205203647 +0000 UTC"}, Hostname:"ci-4081-3-6-n-4bed64c074", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000352f20)} Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.236 [INFO][5248] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.237 [INFO][5248] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.237 [INFO][5248] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-4bed64c074' Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.244 [INFO][5248] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.269 [INFO][5248] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.284 [INFO][5248] ipam/ipam.go 526: Trying affinity for 192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.293 [INFO][5248] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.298 [INFO][5248] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.128/26 host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.298 [INFO][5248] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.128/26 handle="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.300 [INFO][5248] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334 Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.311 [INFO][5248] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.128/26 handle="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.329 [INFO][5248] ipam/ipam.go 1288: Successfully 
claimed IPs: [192.168.51.136/26] block=192.168.51.128/26 handle="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.330 [INFO][5248] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.136/26] handle="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" host="ci-4081-3-6-n-4bed64c074" Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.333 [INFO][5248] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:57.389432 containerd[1593]: 2026-03-07 00:54:57.333 [INFO][5248] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.136/26] IPv6=[] ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" HandleID="k8s-pod-network.003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:57.391586 containerd[1593]: 2026-03-07 00:54:57.340 [INFO][5232] cni-plugin/k8s.go 418: Populated endpoint ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"a347d5f2-52eb-4c7f-89b6-af577611cb2a", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"", Pod:"calico-apiserver-7696fc9784-bpxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddbe4888423", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:57.391586 containerd[1593]: 2026-03-07 00:54:57.340 [INFO][5232] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.136/32] ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:57.391586 containerd[1593]: 2026-03-07 00:54:57.340 [INFO][5232] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliddbe4888423 ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:57.391586 containerd[1593]: 2026-03-07 00:54:57.363 [INFO][5232] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" 
WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:57.391586 containerd[1593]: 2026-03-07 00:54:57.363 [INFO][5232] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"a347d5f2-52eb-4c7f-89b6-af577611cb2a", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334", Pod:"calico-apiserver-7696fc9784-bpxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddbe4888423", MAC:"12:bf:e1:a9:e1:2a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:57.391586 containerd[1593]: 2026-03-07 00:54:57.379 [INFO][5232] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334" Namespace="calico-system" Pod="calico-apiserver-7696fc9784-bpxl5" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:54:57.401369 containerd[1593]: time="2026-03-07T00:54:57.401220724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:57.401369 containerd[1593]: time="2026-03-07T00:54:57.401288764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:57.401369 containerd[1593]: time="2026-03-07T00:54:57.401303764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:57.401793 containerd[1593]: time="2026-03-07T00:54:57.401388884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:57.423176 containerd[1593]: time="2026-03-07T00:54:57.422709945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:57.423176 containerd[1593]: time="2026-03-07T00:54:57.422785305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:57.423176 containerd[1593]: time="2026-03-07T00:54:57.422800425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:57.423176 containerd[1593]: time="2026-03-07T00:54:57.422945985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:57.581501 containerd[1593]: time="2026-03-07T00:54:57.581333384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9747f7f-9p8gb,Uid:9286d8c3-e0b1-413a-999e-690d6a9c7331,Namespace:calico-system,Attempt:1,} returns sandbox id \"d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09\"" Mar 7 00:54:57.594499 containerd[1593]: time="2026-03-07T00:54:57.594198277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7696fc9784-bpxl5,Uid:a347d5f2-52eb-4c7f-89b6-af577611cb2a,Namespace:calico-system,Attempt:1,} returns sandbox id \"003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334\"" Mar 7 00:54:57.602427 containerd[1593]: time="2026-03-07T00:54:57.602281125Z" level=info msg="CreateContainer within sandbox \"003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:54:57.618961 containerd[1593]: time="2026-03-07T00:54:57.618909262Z" level=info msg="CreateContainer within sandbox \"003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b7d19f2761ac2a2efbdb9426208bfc6455e66ffb5334e9a2bd1e2e4fa090fd4\"" Mar 7 00:54:57.619729 containerd[1593]: time="2026-03-07T00:54:57.619681983Z" level=info msg="StartContainer for \"8b7d19f2761ac2a2efbdb9426208bfc6455e66ffb5334e9a2bd1e2e4fa090fd4\"" Mar 7 00:54:57.810183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652901479.mount: Deactivated successfully. 
Mar 7 00:54:57.816009 containerd[1593]: time="2026-03-07T00:54:57.815853419Z" level=info msg="StartContainer for \"8b7d19f2761ac2a2efbdb9426208bfc6455e66ffb5334e9a2bd1e2e4fa090fd4\" returns successfully" Mar 7 00:54:58.228756 containerd[1593]: time="2026-03-07T00:54:58.228678745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:58.233183 containerd[1593]: time="2026-03-07T00:54:58.233145394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 7 00:54:58.235247 containerd[1593]: time="2026-03-07T00:54:58.235184237Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:58.239792 containerd[1593]: time="2026-03-07T00:54:58.238786244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:58.239944 containerd[1593]: time="2026-03-07T00:54:58.239718926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.554455257s" Mar 7 00:54:58.240032 containerd[1593]: time="2026-03-07T00:54:58.240002806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 7 00:54:58.241634 containerd[1593]: time="2026-03-07T00:54:58.241605609Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 00:54:58.248094 containerd[1593]: time="2026-03-07T00:54:58.248053461Z" level=info msg="CreateContainer within sandbox \"717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 00:54:58.274503 containerd[1593]: time="2026-03-07T00:54:58.274461310Z" level=info msg="CreateContainer within sandbox \"717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2d7b5513ca3cfccc15e950fc691e938a9f2f8e76f94fd4665666b7bfb63e0de2\"" Mar 7 00:54:58.275511 containerd[1593]: time="2026-03-07T00:54:58.275476872Z" level=info msg="StartContainer for \"2d7b5513ca3cfccc15e950fc691e938a9f2f8e76f94fd4665666b7bfb63e0de2\"" Mar 7 00:54:58.391629 containerd[1593]: time="2026-03-07T00:54:58.390211444Z" level=info msg="StartContainer for \"2d7b5513ca3cfccc15e950fc691e938a9f2f8e76f94fd4665666b7bfb63e0de2\" returns successfully" Mar 7 00:54:58.531940 systemd-networkd[1246]: cali6aea22dd992: Gained IPv6LL Mar 7 00:54:58.659768 systemd-networkd[1246]: caliddbe4888423: Gained IPv6LL Mar 7 00:54:59.064213 kubelet[2769]: I0307 00:54:59.064178 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:59.093384 kubelet[2769]: I0307 00:54:59.093313 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7696fc9784-bpxl5" podStartSLOduration=33.093295779 podStartE2EDuration="33.093295779s" podCreationTimestamp="2026-03-07 00:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:58.079229309 +0000 UTC m=+50.588792809" watchObservedRunningTime="2026-03-07 00:54:59.093295779 +0000 UTC m=+51.602859279" Mar 7 00:54:59.093569 kubelet[2769]: I0307 00:54:59.093503 2769 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-8l2rp" podStartSLOduration=29.03312342 podStartE2EDuration="33.09349634s" podCreationTimestamp="2026-03-07 00:54:26 +0000 UTC" firstStartedPulling="2026-03-07 00:54:54.181025689 +0000 UTC m=+46.690589189" lastFinishedPulling="2026-03-07 00:54:58.241398609 +0000 UTC m=+50.750962109" observedRunningTime="2026-03-07 00:54:59.086515401 +0000 UTC m=+51.596078941" watchObservedRunningTime="2026-03-07 00:54:59.09349634 +0000 UTC m=+51.603059880" Mar 7 00:55:00.616692 containerd[1593]: time="2026-03-07T00:55:00.616607175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:00.618125 containerd[1593]: time="2026-03-07T00:55:00.618065700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 00:55:00.618971 containerd[1593]: time="2026-03-07T00:55:00.618909383Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:00.622486 containerd[1593]: time="2026-03-07T00:55:00.622420435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:00.623743 containerd[1593]: time="2026-03-07T00:55:00.623687679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.38189623s" Mar 7 00:55:00.623743 
containerd[1593]: time="2026-03-07T00:55:00.623728999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 00:55:00.645259 containerd[1593]: time="2026-03-07T00:55:00.645200074Z" level=info msg="CreateContainer within sandbox \"d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 00:55:00.663342 containerd[1593]: time="2026-03-07T00:55:00.662769935Z" level=info msg="CreateContainer within sandbox \"d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d45eee5cb6c69ce4db702a3734462f873e927fa05650c1bc2d1f55310745bf9d\"" Mar 7 00:55:00.664971 containerd[1593]: time="2026-03-07T00:55:00.664928182Z" level=info msg="StartContainer for \"d45eee5cb6c69ce4db702a3734462f873e927fa05650c1bc2d1f55310745bf9d\"" Mar 7 00:55:00.756416 containerd[1593]: time="2026-03-07T00:55:00.756235298Z" level=info msg="StartContainer for \"d45eee5cb6c69ce4db702a3734462f873e927fa05650c1bc2d1f55310745bf9d\" returns successfully" Mar 7 00:55:01.093946 kubelet[2769]: I0307 00:55:01.093208 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d9747f7f-9p8gb" podStartSLOduration=30.052177161 podStartE2EDuration="33.093191297s" podCreationTimestamp="2026-03-07 00:54:28 +0000 UTC" firstStartedPulling="2026-03-07 00:54:57.583835947 +0000 UTC m=+50.093399447" lastFinishedPulling="2026-03-07 00:55:00.624850043 +0000 UTC m=+53.134413583" observedRunningTime="2026-03-07 00:55:01.092192053 +0000 UTC m=+53.601755553" watchObservedRunningTime="2026-03-07 00:55:01.093191297 +0000 UTC m=+53.602754797" Mar 7 00:55:07.612238 containerd[1593]: time="2026-03-07T00:55:07.611929151Z" level=info msg="StopPodSandbox for 
\"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\"" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.662 [WARNING][5646] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fdc0176d-e376-478f-8d38-7a94fccdf9e9", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4", Pod:"coredns-674b8bbfcf-swrxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf8ce7d8e98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.662 [INFO][5646] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.662 [INFO][5646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" iface="eth0" netns="" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.662 [INFO][5646] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.662 [INFO][5646] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.687 [INFO][5656] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.687 [INFO][5656] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.689 [INFO][5656] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.711 [WARNING][5656] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.711 [INFO][5656] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.714 [INFO][5656] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:07.723998 containerd[1593]: 2026-03-07 00:55:07.718 [INFO][5646] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.723998 containerd[1593]: time="2026-03-07T00:55:07.723889849Z" level=info msg="TearDown network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\" successfully" Mar 7 00:55:07.723998 containerd[1593]: time="2026-03-07T00:55:07.723913409Z" level=info msg="StopPodSandbox for \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\" returns successfully" Mar 7 00:55:07.723998 containerd[1593]: time="2026-03-07T00:55:07.724434093Z" level=info msg="RemovePodSandbox for \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\"" Mar 7 00:55:07.723998 containerd[1593]: time="2026-03-07T00:55:07.724465454Z" level=info msg="Forcibly stopping sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\"" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.768 [WARNING][5671] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fdc0176d-e376-478f-8d38-7a94fccdf9e9", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"d3472dcd70129c5776b61e9cfd8955ccea201961d375616b0834edeb7267efe4", Pod:"coredns-674b8bbfcf-swrxf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf8ce7d8e98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 
00:55:07.768 [INFO][5671] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.768 [INFO][5671] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" iface="eth0" netns="" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.769 [INFO][5671] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.769 [INFO][5671] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.791 [INFO][5679] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.791 [INFO][5679] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.791 [INFO][5679] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.808 [WARNING][5679] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.808 [INFO][5679] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" HandleID="k8s-pod-network.4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--swrxf-eth0" Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.811 [INFO][5679] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:07.816377 containerd[1593]: 2026-03-07 00:55:07.813 [INFO][5671] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a" Mar 7 00:55:07.817648 containerd[1593]: time="2026-03-07T00:55:07.816420184Z" level=info msg="TearDown network for sandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\" successfully" Mar 7 00:55:07.832310 containerd[1593]: time="2026-03-07T00:55:07.832212036Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:55:07.832488 containerd[1593]: time="2026-03-07T00:55:07.832328757Z" level=info msg="RemovePodSandbox \"4bacb54dfda76b72625b37bfec62adfefda0b295fbfd2e2700fc1c64a9d2ec9a\" returns successfully" Mar 7 00:55:07.833116 containerd[1593]: time="2026-03-07T00:55:07.833050123Z" level=info msg="StopPodSandbox for \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\"" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.880 [WARNING][5693] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0", GenerateName:"calico-kube-controllers-6d9747f7f-", Namespace:"calico-system", SelfLink:"", UID:"9286d8c3-e0b1-413a-999e-690d6a9c7331", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9747f7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09", Pod:"calico-kube-controllers-6d9747f7f-9p8gb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.135/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6aea22dd992", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.880 [INFO][5693] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.880 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" iface="eth0" netns="" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.880 [INFO][5693] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.880 [INFO][5693] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.907 [INFO][5700] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.907 [INFO][5700] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.907 [INFO][5700] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.918 [WARNING][5700] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.919 [INFO][5700] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.921 [INFO][5700] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:07.926740 containerd[1593]: 2026-03-07 00:55:07.923 [INFO][5693] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:07.928960 containerd[1593]: time="2026-03-07T00:55:07.927683076Z" level=info msg="TearDown network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\" successfully" Mar 7 00:55:07.928960 containerd[1593]: time="2026-03-07T00:55:07.927729796Z" level=info msg="StopPodSandbox for \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\" returns successfully" Mar 7 00:55:07.928960 containerd[1593]: time="2026-03-07T00:55:07.928336161Z" level=info msg="RemovePodSandbox for \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\"" Mar 7 00:55:07.928960 containerd[1593]: time="2026-03-07T00:55:07.928370801Z" level=info msg="Forcibly stopping sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\"" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:07.975 [WARNING][5714] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0", GenerateName:"calico-kube-controllers-6d9747f7f-", Namespace:"calico-system", SelfLink:"", UID:"9286d8c3-e0b1-413a-999e-690d6a9c7331", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9747f7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"d5698d67b444251b3e68959be9eb603aae070983c8e814d696bc6398e8996d09", Pod:"calico-kube-controllers-6d9747f7f-9p8gb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6aea22dd992", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:07.975 [INFO][5714] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:07.975 [INFO][5714] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" iface="eth0" netns="" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:07.975 [INFO][5714] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:07.975 [INFO][5714] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:08.003 [INFO][5721] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:08.004 [INFO][5721] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:08.004 [INFO][5721] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:08.019 [WARNING][5721] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:08.019 [INFO][5721] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" HandleID="k8s-pod-network.6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--kube--controllers--6d9747f7f--9p8gb-eth0" Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:08.022 [INFO][5721] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.032756 containerd[1593]: 2026-03-07 00:55:08.025 [INFO][5714] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a" Mar 7 00:55:08.032756 containerd[1593]: time="2026-03-07T00:55:08.032486172Z" level=info msg="TearDown network for sandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\" successfully" Mar 7 00:55:08.044354 containerd[1593]: time="2026-03-07T00:55:08.044301358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:55:08.044937 containerd[1593]: time="2026-03-07T00:55:08.044605321Z" level=info msg="RemovePodSandbox \"6d986ed0d28c59a98f5be92fb271be58a6ae689cc89c603e04f8cce1f7a9e06a\" returns successfully" Mar 7 00:55:08.046165 containerd[1593]: time="2026-03-07T00:55:08.046123095Z" level=info msg="StopPodSandbox for \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\"" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.101 [WARNING][5736] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"c378d9fe-bc41-4a89-93f8-75b75a40f945", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c", Pod:"calico-apiserver-7696fc9784-rjnrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid4c20da325a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.102 [INFO][5736] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.102 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" iface="eth0" netns="" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.102 [INFO][5736] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.102 [INFO][5736] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.124 [INFO][5744] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.124 [INFO][5744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.124 [INFO][5744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.136 [WARNING][5744] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.136 [INFO][5744] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.138 [INFO][5744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.143334 containerd[1593]: 2026-03-07 00:55:08.140 [INFO][5736] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.143334 containerd[1593]: time="2026-03-07T00:55:08.143188648Z" level=info msg="TearDown network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\" successfully" Mar 7 00:55:08.143334 containerd[1593]: time="2026-03-07T00:55:08.143222408Z" level=info msg="StopPodSandbox for \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\" returns successfully" Mar 7 00:55:08.145143 containerd[1593]: time="2026-03-07T00:55:08.144760062Z" level=info msg="RemovePodSandbox for \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\"" Mar 7 00:55:08.145143 containerd[1593]: time="2026-03-07T00:55:08.144815982Z" level=info msg="Forcibly stopping sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\"" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.192 [WARNING][5758] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"c378d9fe-bc41-4a89-93f8-75b75a40f945", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"346306cad6f47f424a0698181c4b133a14748b40dea5936b45e5ecdfc2503e9c", Pod:"calico-apiserver-7696fc9784-rjnrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid4c20da325a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.193 [INFO][5758] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.193 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" iface="eth0" netns="" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.193 [INFO][5758] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.193 [INFO][5758] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.215 [INFO][5765] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.216 [INFO][5765] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.216 [INFO][5765] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.227 [WARNING][5765] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.227 [INFO][5765] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" HandleID="k8s-pod-network.3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--rjnrl-eth0" Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.230 [INFO][5765] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.236574 containerd[1593]: 2026-03-07 00:55:08.233 [INFO][5758] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c" Mar 7 00:55:08.236574 containerd[1593]: time="2026-03-07T00:55:08.236451006Z" level=info msg="TearDown network for sandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\" successfully" Mar 7 00:55:08.240990 containerd[1593]: time="2026-03-07T00:55:08.240928486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:55:08.241114 containerd[1593]: time="2026-03-07T00:55:08.241046847Z" level=info msg="RemovePodSandbox \"3488a73341eae6c59d5031e848fc1363c89500db6ca9497a45c08225c89d8b7c\" returns successfully" Mar 7 00:55:08.241717 containerd[1593]: time="2026-03-07T00:55:08.241667933Z" level=info msg="StopPodSandbox for \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\"" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.287 [WARNING][5779] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7edcab55-c19a-4b82-b255-a65ce1665cd4", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87", Pod:"coredns-674b8bbfcf-v77d9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3c79a804bd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.287 [INFO][5779] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.287 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" iface="eth0" netns="" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.287 [INFO][5779] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.287 [INFO][5779] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.316 [INFO][5786] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.316 [INFO][5786] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.316 [INFO][5786] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.328 [WARNING][5786] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.328 [INFO][5786] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.330 [INFO][5786] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.335897 containerd[1593]: 2026-03-07 00:55:08.333 [INFO][5779] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.335897 containerd[1593]: time="2026-03-07T00:55:08.335765219Z" level=info msg="TearDown network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\" successfully" Mar 7 00:55:08.337379 containerd[1593]: time="2026-03-07T00:55:08.335795019Z" level=info msg="StopPodSandbox for \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\" returns successfully" Mar 7 00:55:08.338116 containerd[1593]: time="2026-03-07T00:55:08.338064280Z" level=info msg="RemovePodSandbox for \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\"" Mar 7 00:55:08.338116 containerd[1593]: time="2026-03-07T00:55:08.338101480Z" level=info msg="Forcibly stopping sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\"" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.387 [WARNING][5800] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7edcab55-c19a-4b82-b255-a65ce1665cd4", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"8a173bea5983a6b50651b2b860067daa01f90f626f66c39817947eeea19bfd87", Pod:"coredns-674b8bbfcf-v77d9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif3c79a804bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 
00:55:08.389 [INFO][5800] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.389 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" iface="eth0" netns="" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.389 [INFO][5800] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.389 [INFO][5800] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.411 [INFO][5807] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.412 [INFO][5807] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.412 [INFO][5807] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.422 [WARNING][5807] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.422 [INFO][5807] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" HandleID="k8s-pod-network.35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Workload="ci--4081--3--6--n--4bed64c074-k8s-coredns--674b8bbfcf--v77d9-eth0" Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.424 [INFO][5807] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.428751 containerd[1593]: 2026-03-07 00:55:08.426 [INFO][5800] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696" Mar 7 00:55:08.429262 containerd[1593]: time="2026-03-07T00:55:08.428750375Z" level=info msg="TearDown network for sandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\" successfully" Mar 7 00:55:08.434051 containerd[1593]: time="2026-03-07T00:55:08.433999582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:55:08.435454 containerd[1593]: time="2026-03-07T00:55:08.434083743Z" level=info msg="RemovePodSandbox \"35f3c0516c65429f865137c7a5d3ca4801d1145f0778dcfcb4b47fbacc87c696\" returns successfully" Mar 7 00:55:08.435454 containerd[1593]: time="2026-03-07T00:55:08.434879670Z" level=info msg="StopPodSandbox for \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\"" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.483 [WARNING][5821] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.484 [INFO][5821] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.484 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" iface="eth0" netns="" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.484 [INFO][5821] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.484 [INFO][5821] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.507 [INFO][5829] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.507 [INFO][5829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.507 [INFO][5829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.520 [WARNING][5829] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.520 [INFO][5829] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.523 [INFO][5829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.527775 containerd[1593]: 2026-03-07 00:55:08.525 [INFO][5821] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.528561 containerd[1593]: time="2026-03-07T00:55:08.528341070Z" level=info msg="TearDown network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\" successfully" Mar 7 00:55:08.528561 containerd[1593]: time="2026-03-07T00:55:08.528376231Z" level=info msg="StopPodSandbox for \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\" returns successfully" Mar 7 00:55:08.529247 containerd[1593]: time="2026-03-07T00:55:08.529207678Z" level=info msg="RemovePodSandbox for \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\"" Mar 7 00:55:08.529314 containerd[1593]: time="2026-03-07T00:55:08.529255478Z" level=info msg="Forcibly stopping sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\"" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.569 [WARNING][5843] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" WorkloadEndpoint="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.569 [INFO][5843] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.569 [INFO][5843] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" iface="eth0" netns="" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.569 [INFO][5843] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.569 [INFO][5843] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.591 [INFO][5850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.591 [INFO][5850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.591 [INFO][5850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.602 [WARNING][5850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.602 [INFO][5850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" HandleID="k8s-pod-network.02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Workload="ci--4081--3--6--n--4bed64c074-k8s-whisker--8459c55ddd--4rsh8-eth0" Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.606 [INFO][5850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.611346 containerd[1593]: 2026-03-07 00:55:08.608 [INFO][5843] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319" Mar 7 00:55:08.611346 containerd[1593]: time="2026-03-07T00:55:08.609842443Z" level=info msg="TearDown network for sandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\" successfully" Mar 7 00:55:08.613845 containerd[1593]: time="2026-03-07T00:55:08.613785718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:55:08.614297 containerd[1593]: time="2026-03-07T00:55:08.614272723Z" level=info msg="RemovePodSandbox \"02911ec1ad982c9cde4f23c063868625f52bc06992e6494e20fa927b0a813319\" returns successfully" Mar 7 00:55:08.614962 containerd[1593]: time="2026-03-07T00:55:08.614908529Z" level=info msg="StopPodSandbox for \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\"" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.666 [WARNING][5864] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"a347d5f2-52eb-4c7f-89b6-af577611cb2a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334", Pod:"calico-apiserver-7696fc9784-bpxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddbe4888423", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.666 [INFO][5864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.666 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" iface="eth0" netns="" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.666 [INFO][5864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.666 [INFO][5864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.702 [INFO][5872] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.703 [INFO][5872] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.703 [INFO][5872] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.714 [WARNING][5872] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.714 [INFO][5872] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.716 [INFO][5872] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.726256 containerd[1593]: 2026-03-07 00:55:08.718 [INFO][5864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.726256 containerd[1593]: time="2026-03-07T00:55:08.726035408Z" level=info msg="TearDown network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\" successfully" Mar 7 00:55:08.726256 containerd[1593]: time="2026-03-07T00:55:08.726067928Z" level=info msg="StopPodSandbox for \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\" returns successfully" Mar 7 00:55:08.726730 containerd[1593]: time="2026-03-07T00:55:08.726602173Z" level=info msg="RemovePodSandbox for \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\"" Mar 7 00:55:08.726730 containerd[1593]: time="2026-03-07T00:55:08.726639533Z" level=info msg="Forcibly stopping sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\"" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.777 [WARNING][5886] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0", GenerateName:"calico-apiserver-7696fc9784-", Namespace:"calico-system", SelfLink:"", UID:"a347d5f2-52eb-4c7f-89b6-af577611cb2a", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7696fc9784", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"003e999ddc4c20b02a3d89be32aea0aa3c70cd8e382ec42dccf1a2166faf5334", Pod:"calico-apiserver-7696fc9784-bpxl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliddbe4888423", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.778 [INFO][5886] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.778 [INFO][5886] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" iface="eth0" netns="" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.778 [INFO][5886] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.778 [INFO][5886] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.801 [INFO][5894] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.802 [INFO][5894] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.802 [INFO][5894] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.815 [WARNING][5894] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.815 [INFO][5894] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" HandleID="k8s-pod-network.6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Workload="ci--4081--3--6--n--4bed64c074-k8s-calico--apiserver--7696fc9784--bpxl5-eth0" Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.818 [INFO][5894] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.822953 containerd[1593]: 2026-03-07 00:55:08.820 [INFO][5886] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51" Mar 7 00:55:08.822953 containerd[1593]: time="2026-03-07T00:55:08.822902279Z" level=info msg="TearDown network for sandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\" successfully" Mar 7 00:55:08.828890 containerd[1593]: time="2026-03-07T00:55:08.828833172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:55:08.829042 containerd[1593]: time="2026-03-07T00:55:08.828935573Z" level=info msg="RemovePodSandbox \"6a6960208dc8be6a69b623074d5a123b7d7053907076c6bac456eec83ed5bd51\" returns successfully" Mar 7 00:55:08.829437 containerd[1593]: time="2026-03-07T00:55:08.829409137Z" level=info msg="StopPodSandbox for \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\"" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.876 [WARNING][5908] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"65d62761-8ce6-4210-8674-c27a3da452ab", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532", Pod:"goldmane-5b85766d88-8l2rp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali3bfecd80b59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.876 [INFO][5908] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.876 [INFO][5908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" iface="eth0" netns="" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.876 [INFO][5908] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.876 [INFO][5908] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.899 [INFO][5915] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.899 [INFO][5915] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.899 [INFO][5915] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.912 [WARNING][5915] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.912 [INFO][5915] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.915 [INFO][5915] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:08.920515 containerd[1593]: 2026-03-07 00:55:08.918 [INFO][5908] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:08.921039 containerd[1593]: time="2026-03-07T00:55:08.920688798Z" level=info msg="TearDown network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\" successfully" Mar 7 00:55:08.921039 containerd[1593]: time="2026-03-07T00:55:08.920732518Z" level=info msg="StopPodSandbox for \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\" returns successfully" Mar 7 00:55:08.921423 containerd[1593]: time="2026-03-07T00:55:08.921377404Z" level=info msg="RemovePodSandbox for \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\"" Mar 7 00:55:08.921423 containerd[1593]: time="2026-03-07T00:55:08.921416844Z" level=info msg="Forcibly stopping sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\"" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.962 [WARNING][5929] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"65d62761-8ce6-4210-8674-c27a3da452ab", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-4bed64c074", ContainerID:"717d047e3dcbb741c33179c37854fcf69221eb6ae8a26a2715abe89213e9c532", Pod:"goldmane-5b85766d88-8l2rp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3bfecd80b59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.962 [INFO][5929] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.962 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" iface="eth0" netns="" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.962 [INFO][5929] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.962 [INFO][5929] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.997 [INFO][5936] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.998 [INFO][5936] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:08.999 [INFO][5936] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:09.010 [WARNING][5936] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:09.010 [INFO][5936] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" HandleID="k8s-pod-network.d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Workload="ci--4081--3--6--n--4bed64c074-k8s-goldmane--5b85766d88--8l2rp-eth0" Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:09.012 [INFO][5936] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:55:09.018438 containerd[1593]: 2026-03-07 00:55:09.015 [INFO][5929] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee" Mar 7 00:55:09.019418 containerd[1593]: time="2026-03-07T00:55:09.018476727Z" level=info msg="TearDown network for sandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\" successfully" Mar 7 00:55:09.022174 containerd[1593]: time="2026-03-07T00:55:09.022099442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:55:09.022278 containerd[1593]: time="2026-03-07T00:55:09.022201603Z" level=info msg="RemovePodSandbox \"d110aef9bb83501b76cbb66ec9a0a17d19ebadf314717243f66dd870f2832dee\" returns successfully" Mar 7 00:55:13.974769 kubelet[2769]: I0307 00:55:13.974707 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:55:22.096587 kubelet[2769]: I0307 00:55:22.096427 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:55:58.686311 systemd[1]: run-containerd-runc-k8s.io-2d7b5513ca3cfccc15e950fc691e938a9f2f8e76f94fd4665666b7bfb63e0de2-runc.ZvrUe2.mount: Deactivated successfully. Mar 7 00:56:01.107020 systemd[1]: run-containerd-runc-k8s.io-2d7b5513ca3cfccc15e950fc691e938a9f2f8e76f94fd4665666b7bfb63e0de2-runc.yURWBt.mount: Deactivated successfully. Mar 7 00:56:24.402877 systemd[1]: Started sshd@7-188.245.55.131:22-195.178.110.30:53826.service - OpenSSH per-connection server daemon (195.178.110.30:53826). Mar 7 00:56:24.432021 sshd[6216]: Connection closed by 195.178.110.30 port 53826 Mar 7 00:56:24.433048 systemd[1]: sshd@7-188.245.55.131:22-195.178.110.30:53826.service: Deactivated successfully. Mar 7 00:56:41.972914 systemd[1]: Started sshd@8-188.245.55.131:22-20.161.92.111:44634.service - OpenSSH per-connection server daemon (20.161.92.111:44634). Mar 7 00:56:42.563186 sshd[6286]: Accepted publickey for core from 20.161.92.111 port 44634 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:42.564663 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:42.569583 systemd-logind[1562]: New session 8 of user core. Mar 7 00:56:42.579268 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 00:56:43.145948 sshd[6286]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:43.155099 systemd[1]: sshd@8-188.245.55.131:22-20.161.92.111:44634.service: Deactivated successfully. 
Mar 7 00:56:43.163281 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 00:56:43.165266 systemd-logind[1562]: Session 8 logged out. Waiting for processes to exit. Mar 7 00:56:43.166973 systemd-logind[1562]: Removed session 8. Mar 7 00:56:48.248415 systemd[1]: Started sshd@9-188.245.55.131:22-20.161.92.111:44642.service - OpenSSH per-connection server daemon (20.161.92.111:44642). Mar 7 00:56:48.838329 sshd[6324]: Accepted publickey for core from 20.161.92.111 port 44642 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:48.839879 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:48.844404 systemd-logind[1562]: New session 9 of user core. Mar 7 00:56:48.856156 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 00:56:49.378328 sshd[6324]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:49.383188 systemd[1]: sshd@9-188.245.55.131:22-20.161.92.111:44642.service: Deactivated successfully. Mar 7 00:56:49.387390 systemd-logind[1562]: Session 9 logged out. Waiting for processes to exit. Mar 7 00:56:49.387731 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 00:56:49.389522 systemd-logind[1562]: Removed session 9. Mar 7 00:56:54.484973 systemd[1]: Started sshd@10-188.245.55.131:22-20.161.92.111:58754.service - OpenSSH per-connection server daemon (20.161.92.111:58754). Mar 7 00:56:55.088509 sshd[6339]: Accepted publickey for core from 20.161.92.111 port 58754 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:55.091085 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:55.099253 systemd-logind[1562]: New session 10 of user core. Mar 7 00:56:55.102839 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 7 00:56:55.619339 sshd[6339]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:55.623811 systemd[1]: sshd@10-188.245.55.131:22-20.161.92.111:58754.service: Deactivated successfully.
Mar 7 00:56:55.631081 systemd[1]: session-10.scope: Deactivated successfully.
Mar 7 00:56:55.634778 systemd-logind[1562]: Session 10 logged out. Waiting for processes to exit.
Mar 7 00:56:55.636350 systemd-logind[1562]: Removed session 10.
Mar 7 00:57:00.722005 systemd[1]: Started sshd@11-188.245.55.131:22-20.161.92.111:54258.service - OpenSSH per-connection server daemon (20.161.92.111:54258).
Mar 7 00:57:01.314556 sshd[6390]: Accepted publickey for core from 20.161.92.111 port 54258 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:01.319885 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:01.329825 systemd-logind[1562]: New session 11 of user core.
Mar 7 00:57:01.335564 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 7 00:57:01.840978 sshd[6390]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:01.847557 systemd[1]: sshd@11-188.245.55.131:22-20.161.92.111:54258.service: Deactivated successfully.
Mar 7 00:57:01.854410 systemd[1]: session-11.scope: Deactivated successfully.
Mar 7 00:57:01.856470 systemd-logind[1562]: Session 11 logged out. Waiting for processes to exit.
Mar 7 00:57:01.858140 systemd-logind[1562]: Removed session 11.
Mar 7 00:57:01.942019 systemd[1]: Started sshd@12-188.245.55.131:22-20.161.92.111:54266.service - OpenSSH per-connection server daemon (20.161.92.111:54266).
Mar 7 00:57:02.539659 sshd[6446]: Accepted publickey for core from 20.161.92.111 port 54266 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:02.541015 sshd[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:02.547926 systemd-logind[1562]: New session 12 of user core.
Mar 7 00:57:02.554047 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 7 00:57:03.191844 sshd[6446]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:03.197104 systemd-logind[1562]: Session 12 logged out. Waiting for processes to exit.
Mar 7 00:57:03.197422 systemd[1]: sshd@12-188.245.55.131:22-20.161.92.111:54266.service: Deactivated successfully.
Mar 7 00:57:03.202430 systemd[1]: session-12.scope: Deactivated successfully.
Mar 7 00:57:03.205329 systemd-logind[1562]: Removed session 12.
Mar 7 00:57:03.292959 systemd[1]: Started sshd@13-188.245.55.131:22-20.161.92.111:54282.service - OpenSSH per-connection server daemon (20.161.92.111:54282).
Mar 7 00:57:03.880337 sshd[6458]: Accepted publickey for core from 20.161.92.111 port 54282 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:03.882838 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:03.888222 systemd-logind[1562]: New session 13 of user core.
Mar 7 00:57:03.893934 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 7 00:57:04.394816 sshd[6458]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:04.398388 systemd[1]: sshd@13-188.245.55.131:22-20.161.92.111:54282.service: Deactivated successfully.
Mar 7 00:57:04.403226 systemd[1]: session-13.scope: Deactivated successfully.
Mar 7 00:57:04.403376 systemd-logind[1562]: Session 13 logged out. Waiting for processes to exit.
Mar 7 00:57:04.405109 systemd-logind[1562]: Removed session 13.
Mar 7 00:57:09.500917 systemd[1]: Started sshd@14-188.245.55.131:22-20.161.92.111:54288.service - OpenSSH per-connection server daemon (20.161.92.111:54288).
Mar 7 00:57:10.086880 sshd[6473]: Accepted publickey for core from 20.161.92.111 port 54288 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:10.088813 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:10.095016 systemd-logind[1562]: New session 14 of user core.
Mar 7 00:57:10.099836 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 7 00:57:10.593863 sshd[6473]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:10.600445 systemd[1]: sshd@14-188.245.55.131:22-20.161.92.111:54288.service: Deactivated successfully.
Mar 7 00:57:10.606904 systemd[1]: session-14.scope: Deactivated successfully.
Mar 7 00:57:10.610162 systemd-logind[1562]: Session 14 logged out. Waiting for processes to exit.
Mar 7 00:57:10.611526 systemd-logind[1562]: Removed session 14.
Mar 7 00:57:10.700373 systemd[1]: Started sshd@15-188.245.55.131:22-20.161.92.111:40580.service - OpenSSH per-connection server daemon (20.161.92.111:40580).
Mar 7 00:57:11.287725 sshd[6486]: Accepted publickey for core from 20.161.92.111 port 40580 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:11.290285 sshd[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:11.295573 systemd-logind[1562]: New session 15 of user core.
Mar 7 00:57:11.299807 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 7 00:57:11.924932 sshd[6486]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:11.930242 systemd[1]: sshd@15-188.245.55.131:22-20.161.92.111:40580.service: Deactivated successfully.
Mar 7 00:57:11.935043 systemd[1]: session-15.scope: Deactivated successfully.
Mar 7 00:57:11.936153 systemd-logind[1562]: Session 15 logged out. Waiting for processes to exit.
Mar 7 00:57:11.937218 systemd-logind[1562]: Removed session 15.
Mar 7 00:57:12.025853 systemd[1]: Started sshd@16-188.245.55.131:22-20.161.92.111:40592.service - OpenSSH per-connection server daemon (20.161.92.111:40592).
Mar 7 00:57:12.617005 sshd[6498]: Accepted publickey for core from 20.161.92.111 port 40592 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:12.619018 sshd[6498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:12.624231 systemd-logind[1562]: New session 16 of user core.
Mar 7 00:57:12.628771 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 7 00:57:13.777184 sshd[6498]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:13.784092 systemd[1]: sshd@16-188.245.55.131:22-20.161.92.111:40592.service: Deactivated successfully.
Mar 7 00:57:13.789037 systemd[1]: session-16.scope: Deactivated successfully.
Mar 7 00:57:13.789683 systemd-logind[1562]: Session 16 logged out. Waiting for processes to exit.
Mar 7 00:57:13.791925 systemd-logind[1562]: Removed session 16.
Mar 7 00:57:13.877856 systemd[1]: Started sshd@17-188.245.55.131:22-20.161.92.111:40608.service - OpenSSH per-connection server daemon (20.161.92.111:40608).
Mar 7 00:57:14.471702 sshd[6527]: Accepted publickey for core from 20.161.92.111 port 40608 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:14.473459 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:14.486896 systemd-logind[1562]: New session 17 of user core.
Mar 7 00:57:14.491833 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 7 00:57:15.201889 sshd[6527]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:15.210583 systemd[1]: sshd@17-188.245.55.131:22-20.161.92.111:40608.service: Deactivated successfully.
Mar 7 00:57:15.211410 systemd-logind[1562]: Session 17 logged out. Waiting for processes to exit.
Mar 7 00:57:15.214998 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 00:57:15.217617 systemd-logind[1562]: Removed session 17.
Mar 7 00:57:15.307049 systemd[1]: Started sshd@18-188.245.55.131:22-20.161.92.111:40620.service - OpenSSH per-connection server daemon (20.161.92.111:40620).
Mar 7 00:57:15.885611 update_engine[1569]: I20260307 00:57:15.885468 1569 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 7 00:57:15.885611 update_engine[1569]: I20260307 00:57:15.885615 1569 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 7 00:57:15.886318 update_engine[1569]: I20260307 00:57:15.886024 1569 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 7 00:57:15.886946 update_engine[1569]: I20260307 00:57:15.886816 1569 omaha_request_params.cc:62] Current group set to lts
Mar 7 00:57:15.888602 update_engine[1569]: I20260307 00:57:15.888458 1569 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 7 00:57:15.888602 update_engine[1569]: I20260307 00:57:15.888564 1569 update_attempter.cc:643] Scheduling an action processor start.
Mar 7 00:57:15.888805 update_engine[1569]: I20260307 00:57:15.888624 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 00:57:15.890935 update_engine[1569]: I20260307 00:57:15.890885 1569 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 7 00:57:15.892151 update_engine[1569]: I20260307 00:57:15.891008 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 00:57:15.892151 update_engine[1569]: I20260307 00:57:15.891026 1569 omaha_request_action.cc:272] Request:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]:
Mar 7 00:57:15.892151 update_engine[1569]: I20260307 00:57:15.891036 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 00:57:15.896277 update_engine[1569]: I20260307 00:57:15.895815 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 00:57:15.896277 update_engine[1569]: I20260307 00:57:15.896206 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 00:57:15.897405 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 7 00:57:15.897720 update_engine[1569]: E20260307 00:57:15.897308 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 00:57:15.897720 update_engine[1569]: I20260307 00:57:15.897442 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 7 00:57:15.898782 sshd[6560]: Accepted publickey for core from 20.161.92.111 port 40620 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:15.900450 sshd[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:15.905712 systemd-logind[1562]: New session 18 of user core.
Mar 7 00:57:15.910853 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 00:57:16.432820 sshd[6560]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:16.439867 systemd[1]: sshd@18-188.245.55.131:22-20.161.92.111:40620.service: Deactivated successfully.
Mar 7 00:57:16.443361 systemd-logind[1562]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:57:16.444759 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:57:16.446861 systemd-logind[1562]: Removed session 18.
Mar 7 00:57:21.536868 systemd[1]: Started sshd@19-188.245.55.131:22-20.161.92.111:58470.service - OpenSSH per-connection server daemon (20.161.92.111:58470).
Mar 7 00:57:22.126895 sshd[6575]: Accepted publickey for core from 20.161.92.111 port 58470 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:22.129015 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:22.134731 systemd-logind[1562]: New session 19 of user core.
Mar 7 00:57:22.140933 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:57:22.667822 sshd[6575]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:22.673937 systemd-logind[1562]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:57:22.674402 systemd[1]: sshd@19-188.245.55.131:22-20.161.92.111:58470.service: Deactivated successfully.
Mar 7 00:57:22.678939 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:57:22.680294 systemd-logind[1562]: Removed session 19.
Mar 7 00:57:25.889334 update_engine[1569]: I20260307 00:57:25.888581 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 00:57:25.889334 update_engine[1569]: I20260307 00:57:25.888922 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 00:57:25.889334 update_engine[1569]: I20260307 00:57:25.889238 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 00:57:25.890679 update_engine[1569]: E20260307 00:57:25.890645 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 00:57:25.890827 update_engine[1569]: I20260307 00:57:25.890806 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 7 00:57:27.769833 systemd[1]: Started sshd@20-188.245.55.131:22-20.161.92.111:58482.service - OpenSSH per-connection server daemon (20.161.92.111:58482).
Mar 7 00:57:28.361824 sshd[6599]: Accepted publickey for core from 20.161.92.111 port 58482 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI
Mar 7 00:57:28.363824 sshd[6599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:28.370809 systemd-logind[1562]: New session 20 of user core.
Mar 7 00:57:28.377213 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:57:28.894337 sshd[6599]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:28.900731 systemd[1]: sshd@20-188.245.55.131:22-20.161.92.111:58482.service: Deactivated successfully.
Mar 7 00:57:28.903302 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:57:28.904916 systemd-logind[1562]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:57:28.906192 systemd-logind[1562]: Removed session 20.
Mar 7 00:57:35.890137 update_engine[1569]: I20260307 00:57:35.889950 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 00:57:35.891299 update_engine[1569]: I20260307 00:57:35.890774 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 00:57:35.891299 update_engine[1569]: I20260307 00:57:35.891036 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 00:57:35.892344 update_engine[1569]: E20260307 00:57:35.892187 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 00:57:35.892344 update_engine[1569]: I20260307 00:57:35.892302 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 7 00:57:44.231751 kubelet[2769]: E0307 00:57:44.231459 2769 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37634->10.0.0.2:2379: read: connection timed out"
Mar 7 00:57:44.248157 containerd[1593]: time="2026-03-07T00:57:44.243148706Z" level=info msg="shim disconnected" id=2204708b77eb2a4e144e1f21c838544a5fcdaf750ae37d82ae9d0bcfec2361fc namespace=k8s.io
Mar 7 00:57:44.248157 containerd[1593]: time="2026-03-07T00:57:44.243206987Z" level=warning msg="cleaning up after shim disconnected" id=2204708b77eb2a4e144e1f21c838544a5fcdaf750ae37d82ae9d0bcfec2361fc namespace=k8s.io
Mar 7 00:57:44.248157 containerd[1593]: time="2026-03-07T00:57:44.243215307Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:44.247904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2204708b77eb2a4e144e1f21c838544a5fcdaf750ae37d82ae9d0bcfec2361fc-rootfs.mount: Deactivated successfully.
Mar 7 00:57:44.279243 containerd[1593]: time="2026-03-07T00:57:44.279185400Z" level=info msg="shim disconnected" id=5178d5af6c020c04bfd027b688cf2e713ed28670aafae6b3fde8b22a52301477 namespace=k8s.io
Mar 7 00:57:44.280218 containerd[1593]: time="2026-03-07T00:57:44.279642605Z" level=warning msg="cleaning up after shim disconnected" id=5178d5af6c020c04bfd027b688cf2e713ed28670aafae6b3fde8b22a52301477 namespace=k8s.io
Mar 7 00:57:44.280218 containerd[1593]: time="2026-03-07T00:57:44.279665365Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:44.284803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5178d5af6c020c04bfd027b688cf2e713ed28670aafae6b3fde8b22a52301477-rootfs.mount: Deactivated successfully.
Mar 7 00:57:44.561456 kubelet[2769]: I0307 00:57:44.561392 2769 scope.go:117] "RemoveContainer" containerID="5178d5af6c020c04bfd027b688cf2e713ed28670aafae6b3fde8b22a52301477"
Mar 7 00:57:44.561730 kubelet[2769]: I0307 00:57:44.561689 2769 scope.go:117] "RemoveContainer" containerID="2204708b77eb2a4e144e1f21c838544a5fcdaf750ae37d82ae9d0bcfec2361fc"
Mar 7 00:57:44.564583 containerd[1593]: time="2026-03-07T00:57:44.564291355Z" level=info msg="CreateContainer within sandbox \"934c8079256d2fd4729aa354f8827edc0936864af1165a846aa21ba3bccbdace\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 7 00:57:44.564583 containerd[1593]: time="2026-03-07T00:57:44.564511198Z" level=info msg="CreateContainer within sandbox \"1ebcd748f1fc68c3f78a753d5a149cde22e11a75a261a3654139db3f9302aba2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 7 00:57:44.578819 containerd[1593]: time="2026-03-07T00:57:44.578779401Z" level=info msg="CreateContainer within sandbox \"934c8079256d2fd4729aa354f8827edc0936864af1165a846aa21ba3bccbdace\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ed4320a315246648bebba461e50855d9abdfa5eb7e38be34037113ef123646c8\""
Mar 7 00:57:44.579526 containerd[1593]: time="2026-03-07T00:57:44.579501970Z" level=info msg="StartContainer for \"ed4320a315246648bebba461e50855d9abdfa5eb7e38be34037113ef123646c8\""
Mar 7 00:57:44.580749 containerd[1593]: time="2026-03-07T00:57:44.580712224Z" level=info msg="CreateContainer within sandbox \"1ebcd748f1fc68c3f78a753d5a149cde22e11a75a261a3654139db3f9302aba2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6b7dc7596c17feb4431266691354526e17e51a0f482243ed87c852bb3edd9216\""
Mar 7 00:57:44.581262 containerd[1593]: time="2026-03-07T00:57:44.581214669Z" level=info msg="StartContainer for \"6b7dc7596c17feb4431266691354526e17e51a0f482243ed87c852bb3edd9216\""
Mar 7 00:57:44.649451 containerd[1593]: time="2026-03-07T00:57:44.649259451Z" level=info msg="StartContainer for \"ed4320a315246648bebba461e50855d9abdfa5eb7e38be34037113ef123646c8\" returns successfully"
Mar 7 00:57:44.665362 containerd[1593]: time="2026-03-07T00:57:44.665197474Z" level=info msg="StartContainer for \"6b7dc7596c17feb4431266691354526e17e51a0f482243ed87c852bb3edd9216\" returns successfully"
Mar 7 00:57:44.914616 containerd[1593]: time="2026-03-07T00:57:44.909635162Z" level=info msg="shim disconnected" id=dc20bea1776dfb9950fb5c9417d55ff774e9d6d0f291f39b9dc5b512b392689a namespace=k8s.io
Mar 7 00:57:44.914616 containerd[1593]: time="2026-03-07T00:57:44.909698323Z" level=warning msg="cleaning up after shim disconnected" id=dc20bea1776dfb9950fb5c9417d55ff774e9d6d0f291f39b9dc5b512b392689a namespace=k8s.io
Mar 7 00:57:44.914616 containerd[1593]: time="2026-03-07T00:57:44.909708683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 7 00:57:44.920770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc20bea1776dfb9950fb5c9417d55ff774e9d6d0f291f39b9dc5b512b392689a-rootfs.mount: Deactivated successfully.
Mar 7 00:57:45.568874 kubelet[2769]: I0307 00:57:45.568841 2769 scope.go:117] "RemoveContainer" containerID="dc20bea1776dfb9950fb5c9417d55ff774e9d6d0f291f39b9dc5b512b392689a"
Mar 7 00:57:45.572559 containerd[1593]: time="2026-03-07T00:57:45.571096193Z" level=info msg="CreateContainer within sandbox \"eb76bf62440752e3ceda19f6f74eeccec6c69fcb820a2702e6309b010058604b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 7 00:57:45.593020 containerd[1593]: time="2026-03-07T00:57:45.592786926Z" level=info msg="CreateContainer within sandbox \"eb76bf62440752e3ceda19f6f74eeccec6c69fcb820a2702e6309b010058604b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6ef0c273f3478f26c9f1a8c19dccfd5de9b4d799b891e0dec1e6e9ff63c5ee81\""
Mar 7 00:57:45.594588 containerd[1593]: time="2026-03-07T00:57:45.593593536Z" level=info msg="StartContainer for \"6ef0c273f3478f26c9f1a8c19dccfd5de9b4d799b891e0dec1e6e9ff63c5ee81\""
Mar 7 00:57:45.686741 containerd[1593]: time="2026-03-07T00:57:45.686690784Z" level=info msg="StartContainer for \"6ef0c273f3478f26c9f1a8c19dccfd5de9b4d799b891e0dec1e6e9ff63c5ee81\" returns successfully"
Mar 7 00:57:45.895102 update_engine[1569]: I20260307 00:57:45.894555 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 00:57:45.895102 update_engine[1569]: I20260307 00:57:45.894785 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 00:57:45.895102 update_engine[1569]: I20260307 00:57:45.894984 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 00:57:45.896146 update_engine[1569]: E20260307 00:57:45.896116 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 00:57:45.896277 update_engine[1569]: I20260307 00:57:45.896260 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 00:57:45.896339 update_engine[1569]: I20260307 00:57:45.896323 1569 omaha_request_action.cc:617] Omaha request response:
Mar 7 00:57:45.896989 update_engine[1569]: E20260307 00:57:45.896462 1569 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896487 1569 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896493 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896498 1569 update_attempter.cc:306] Processing Done.
Mar 7 00:57:45.896989 update_engine[1569]: E20260307 00:57:45.896513 1569 update_attempter.cc:619] Update failed.
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896519 1569 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896524 1569 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896544 1569 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896610 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896629 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896636 1569 omaha_request_action.cc:272] Request:
Mar 7 00:57:45.896989 update_engine[1569]:
Mar 7 00:57:45.896989 update_engine[1569]:
Mar 7 00:57:45.896989 update_engine[1569]:
Mar 7 00:57:45.896989 update_engine[1569]:
Mar 7 00:57:45.896989 update_engine[1569]:
Mar 7 00:57:45.896989 update_engine[1569]:
Mar 7 00:57:45.896989 update_engine[1569]: I20260307 00:57:45.896642 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 7 00:57:45.897385 update_engine[1569]: I20260307 00:57:45.896768 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 7 00:57:45.897385 update_engine[1569]: I20260307 00:57:45.896922 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 7 00:57:45.899871 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 7 00:57:45.900505 update_engine[1569]: E20260307 00:57:45.900121 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 7 00:57:45.900505 update_engine[1569]: I20260307 00:57:45.900168 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 7 00:57:45.900505 update_engine[1569]: I20260307 00:57:45.900177 1569 omaha_request_action.cc:617] Omaha request response:
Mar 7 00:57:45.900505 update_engine[1569]: I20260307 00:57:45.900183 1569 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 00:57:45.900505 update_engine[1569]: I20260307 00:57:45.900188 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 7 00:57:45.900505 update_engine[1569]: I20260307 00:57:45.900192 1569 update_attempter.cc:306] Processing Done.
Mar 7 00:57:45.900505 update_engine[1569]: I20260307 00:57:45.900198 1569 update_attempter.cc:310] Error event sent.
Mar 7 00:57:45.900505 update_engine[1569]: I20260307 00:57:45.900206 1569 update_check_scheduler.cc:74] Next update check in 40m6s
Mar 7 00:57:45.900834 locksmithd[1621]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 7 00:57:49.164995 kubelet[2769]: E0307 00:57:49.158567 2769 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:37436->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-4bed64c074.189a69301db10da8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-4bed64c074,UID:ecc4f08f49f3a779af79aa3a0d525e9e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-4bed64c074,},FirstTimestamp:2026-03-07 00:57:38.701200808 +0000 UTC m=+211.210764308,LastTimestamp:2026-03-07 00:57:38.701200808 +0000 UTC m=+211.210764308,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-4bed64c074,}"