Mar 7 00:52:38.942295 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 7 00:52:38.942331 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026 Mar 7 00:52:38.942344 kernel: KASLR enabled Mar 7 00:52:38.942350 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Mar 7 00:52:38.942356 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18 Mar 7 00:52:38.942362 kernel: random: crng init done Mar 7 00:52:38.942369 kernel: ACPI: Early table checksum verification disabled Mar 7 00:52:38.942375 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Mar 7 00:52:38.942381 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Mar 7 00:52:38.942388 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942395 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942401 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942407 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942413 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942420 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942429 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942435 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942442 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 7 00:52:38.942449 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Mar 7 00:52:38.942455 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Mar 7 00:52:38.942461 kernel: NUMA: Failed to initialise from firmware Mar 7 00:52:38.942468 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Mar 7 00:52:38.942474 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff] Mar 7 00:52:38.942480 kernel: Zone ranges: Mar 7 00:52:38.942487 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Mar 7 00:52:38.942495 kernel: DMA32 empty Mar 7 00:52:38.942501 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Mar 7 00:52:38.942507 kernel: Movable zone start for each node Mar 7 00:52:38.942514 kernel: Early memory node ranges Mar 7 00:52:38.942520 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Mar 7 00:52:38.942527 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Mar 7 00:52:38.942533 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Mar 7 00:52:38.942539 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Mar 7 00:52:38.942546 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Mar 7 00:52:38.942552 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Mar 7 00:52:38.942558 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Mar 7 00:52:38.942565 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Mar 7 00:52:38.942574 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Mar 7 00:52:38.942580 kernel: psci: probing for conduit method from ACPI. 
Mar 7 00:52:38.942587 kernel: psci: PSCIv1.1 detected in firmware. Mar 7 00:52:38.942596 kernel: psci: Using standard PSCI v0.2 function IDs Mar 7 00:52:38.942603 kernel: psci: Trusted OS migration not required Mar 7 00:52:38.942610 kernel: psci: SMC Calling Convention v1.1 Mar 7 00:52:38.942619 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 7 00:52:38.942626 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880 Mar 7 00:52:38.942632 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096 Mar 7 00:52:38.942639 kernel: pcpu-alloc: [0] 0 [0] 1 Mar 7 00:52:38.942646 kernel: Detected PIPT I-cache on CPU0 Mar 7 00:52:38.942653 kernel: CPU features: detected: GIC system register CPU interface Mar 7 00:52:38.942660 kernel: CPU features: detected: Hardware dirty bit management Mar 7 00:52:38.942666 kernel: CPU features: detected: Spectre-v4 Mar 7 00:52:38.942673 kernel: CPU features: detected: Spectre-BHB Mar 7 00:52:38.942680 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 7 00:52:38.942689 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 7 00:52:38.942695 kernel: CPU features: detected: ARM erratum 1418040 Mar 7 00:52:38.942702 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 7 00:52:38.942709 kernel: alternatives: applying boot alternatives Mar 7 00:52:38.942717 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 00:52:38.942771 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 7 00:52:38.942780 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 7 00:52:38.942787 kernel: Fallback order for Node 0: 0 Mar 7 00:52:38.942794 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Mar 7 00:52:38.942801 kernel: Policy zone: Normal Mar 7 00:52:38.942808 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 7 00:52:38.942818 kernel: software IO TLB: area num 2. Mar 7 00:52:38.942826 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Mar 7 00:52:38.942834 kernel: Memory: 3882812K/4096000K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 213188K reserved, 0K cma-reserved) Mar 7 00:52:38.942841 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 7 00:52:38.942847 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 7 00:52:38.942855 kernel: rcu: RCU event tracing is enabled. Mar 7 00:52:38.942863 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 7 00:52:38.942870 kernel: Trampoline variant of Tasks RCU enabled. Mar 7 00:52:38.942877 kernel: Tracing variant of Tasks RCU enabled. Mar 7 00:52:38.942884 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 7 00:52:38.942891 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 7 00:52:38.942897 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 7 00:52:38.942906 kernel: GICv3: 256 SPIs implemented Mar 7 00:52:38.942913 kernel: GICv3: 0 Extended SPIs implemented Mar 7 00:52:38.942919 kernel: Root IRQ handler: gic_handle_irq Mar 7 00:52:38.942926 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Mar 7 00:52:38.942933 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 7 00:52:38.942939 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 7 00:52:38.942946 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Mar 7 00:52:38.942954 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Mar 7 00:52:38.942961 kernel: GICv3: using LPI property table @0x00000001000e0000 Mar 7 00:52:38.942968 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Mar 7 00:52:38.942975 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 7 00:52:38.942983 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 00:52:38.942991 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 7 00:52:38.942998 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 7 00:52:38.943005 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 7 00:52:38.943012 kernel: Console: colour dummy device 80x25 Mar 7 00:52:38.943019 kernel: ACPI: Core revision 20230628 Mar 7 00:52:38.943027 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 7 00:52:38.943034 kernel: pid_max: default: 32768 minimum: 301 Mar 7 00:52:38.943041 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 7 00:52:38.943863 kernel: landlock: Up and running. Mar 7 00:52:38.943879 kernel: SELinux: Initializing. Mar 7 00:52:38.943887 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 00:52:38.943894 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 7 00:52:38.943902 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 00:52:38.943909 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 7 00:52:38.943916 kernel: rcu: Hierarchical SRCU implementation. Mar 7 00:52:38.943924 kernel: rcu: Max phase no-delay instances is 400. Mar 7 00:52:38.943931 kernel: Platform MSI: ITS@0x8080000 domain created Mar 7 00:52:38.943938 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 7 00:52:38.943948 kernel: Remapping and enabling EFI services. Mar 7 00:52:38.943955 kernel: smp: Bringing up secondary CPUs ... Mar 7 00:52:38.943962 kernel: Detected PIPT I-cache on CPU1 Mar 7 00:52:38.943969 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 7 00:52:38.943977 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Mar 7 00:52:38.943984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 7 00:52:38.943991 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 7 00:52:38.943998 kernel: smp: Brought up 1 node, 2 CPUs Mar 7 00:52:38.944005 kernel: SMP: Total of 2 processors activated. 
Mar 7 00:52:38.944014 kernel: CPU features: detected: 32-bit EL0 Support Mar 7 00:52:38.944021 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 7 00:52:38.944028 kernel: CPU features: detected: Common not Private translations Mar 7 00:52:38.944041 kernel: CPU features: detected: CRC32 instructions Mar 7 00:52:38.944073 kernel: CPU features: detected: Enhanced Virtualization Traps Mar 7 00:52:38.944081 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 7 00:52:38.944088 kernel: CPU features: detected: LSE atomic instructions Mar 7 00:52:38.944096 kernel: CPU features: detected: Privileged Access Never Mar 7 00:52:38.944103 kernel: CPU features: detected: RAS Extension Support Mar 7 00:52:38.944113 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 7 00:52:38.944121 kernel: CPU: All CPU(s) started at EL1 Mar 7 00:52:38.944129 kernel: alternatives: applying system-wide alternatives Mar 7 00:52:38.944138 kernel: devtmpfs: initialized Mar 7 00:52:38.944147 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 7 00:52:38.944154 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 7 00:52:38.944162 kernel: pinctrl core: initialized pinctrl subsystem Mar 7 00:52:38.944170 kernel: SMBIOS 3.0.0 present. Mar 7 00:52:38.944179 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Mar 7 00:52:38.944186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 7 00:52:38.944194 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 7 00:52:38.944201 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 7 00:52:38.944209 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 7 00:52:38.944216 kernel: audit: initializing netlink subsys (disabled) Mar 7 00:52:38.944227 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Mar 7 00:52:38.944235 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 7 00:52:38.944244 kernel: cpuidle: using governor menu Mar 7 00:52:38.944254 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Mar 7 00:52:38.944261 kernel: ASID allocator initialised with 32768 entries Mar 7 00:52:38.944268 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 7 00:52:38.944276 kernel: Serial: AMBA PL011 UART driver Mar 7 00:52:38.944284 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Mar 7 00:52:38.944291 kernel: Modules: 0 pages in range for non-PLT usage Mar 7 00:52:38.944299 kernel: Modules: 509008 pages in range for PLT usage Mar 7 00:52:38.944306 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 7 00:52:38.944314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Mar 7 00:52:38.944323 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Mar 7 00:52:38.944330 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Mar 7 00:52:38.944338 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 7 00:52:38.944345 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Mar 7 00:52:38.944353 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Mar 7 00:52:38.944360 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Mar 7 00:52:38.944368 kernel: ACPI: Added _OSI(Module Device) Mar 7 00:52:38.944375 kernel: ACPI: Added _OSI(Processor Device) Mar 7 00:52:38.944382 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 7 00:52:38.944392 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 7 00:52:38.944400 kernel: ACPI: Interpreter enabled Mar 7 00:52:38.944408 kernel: ACPI: Using GIC for interrupt routing Mar 7 00:52:38.944416 kernel: ACPI: MCFG table detected, 1 entries Mar 7 00:52:38.944423 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 7 00:52:38.944431 kernel: printk: console [ttyAMA0] enabled Mar 7 00:52:38.944438 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 7 00:52:38.944626 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 7 00:52:38.944713 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 7 00:52:38.944802 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 7 00:52:38.944871 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 7 00:52:38.944935 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 7 00:52:38.944944 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 7 00:52:38.944952 kernel: PCI host bridge to bus 0000:00 Mar 7 00:52:38.945026 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 7 00:52:38.945532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Mar 7 00:52:38.945607 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 7 00:52:38.945666 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 7 00:52:38.945813 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 7 00:52:38.945904 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Mar 7 00:52:38.945974 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Mar 7 00:52:38.946042 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Mar 7 00:52:38.946155 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.946225 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Mar 7 00:52:38.946307 kernel: pci 0000:00:02.1: [1b36:000c] 
type 01 class 0x060400 Mar 7 00:52:38.946375 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Mar 7 00:52:38.946448 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.946518 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Mar 7 00:52:38.946596 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.946664 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Mar 7 00:52:38.946755 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.946828 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Mar 7 00:52:38.946903 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.946971 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Mar 7 00:52:38.947146 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.947234 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Mar 7 00:52:38.947311 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.947380 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Mar 7 00:52:38.947453 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Mar 7 00:52:38.947520 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Mar 7 00:52:38.947608 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Mar 7 00:52:38.947675 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Mar 7 00:52:38.947776 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Mar 7 00:52:38.947853 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Mar 7 00:52:38.947922 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 7 00:52:38.947993 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 7 00:52:38.948106 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Mar 7 00:52:38.948204 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Mar 7 00:52:38.948301 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Mar 7 00:52:38.948381 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Mar 7 00:52:38.948467 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Mar 7 00:52:38.948580 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Mar 7 00:52:38.948664 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Mar 7 00:52:38.948811 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Mar 7 00:52:38.948895 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Mar 7 00:52:38.948964 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Mar 7 00:52:38.949114 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Mar 7 00:52:38.949205 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Mar 7 00:52:38.949273 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Mar 7 00:52:38.949358 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Mar 7 00:52:38.949425 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Mar 7 00:52:38.949493 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Mar 7 00:52:38.949559 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Mar 7 00:52:38.949629 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Mar 7 00:52:38.949697 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to 
[bus 01] add_size 100000 add_align 100000 Mar 7 00:52:38.949787 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Mar 7 00:52:38.949868 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Mar 7 00:52:38.949952 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Mar 7 00:52:38.950031 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Mar 7 00:52:38.950130 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Mar 7 00:52:38.950199 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Mar 7 00:52:38.950267 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Mar 7 00:52:38.950346 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Mar 7 00:52:38.950415 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Mar 7 00:52:38.950486 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Mar 7 00:52:38.950559 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Mar 7 00:52:38.950627 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Mar 7 00:52:38.950693 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Mar 7 00:52:38.950785 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Mar 7 00:52:38.950856 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Mar 7 00:52:38.950922 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Mar 7 00:52:38.950997 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Mar 7 00:52:38.954267 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Mar 7 00:52:38.954385 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Mar 7 00:52:38.954461 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Mar 7 00:52:38.954531 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Mar 7 00:52:38.954596 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Mar 7 00:52:38.954670 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Mar 7 00:52:38.954793 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Mar 7 00:52:38.954883 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Mar 7 00:52:38.954956 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Mar 7 00:52:38.955023 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Mar 7 00:52:38.956452 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 
0x10200000-0x103fffff] Mar 7 00:52:38.956550 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Mar 7 00:52:38.956629 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Mar 7 00:52:38.956706 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Mar 7 00:52:38.956840 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Mar 7 00:52:38.956923 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Mar 7 00:52:38.957005 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Mar 7 00:52:38.958190 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Mar 7 00:52:38.958292 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Mar 7 00:52:38.958375 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 7 00:52:38.958462 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Mar 7 00:52:38.958538 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 7 00:52:38.958622 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Mar 7 00:52:38.958692 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 7 00:52:38.958783 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Mar 7 00:52:38.958854 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Mar 7 00:52:38.958928 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Mar 7 00:52:38.959000 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Mar 7 00:52:38.960139 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Mar 7 00:52:38.960234 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Mar 7 00:52:38.960306 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Mar 7 00:52:38.960378 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Mar 7 00:52:38.960448 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Mar 7 00:52:38.960515 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Mar 7 00:52:38.960584 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Mar 7 00:52:38.960661 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Mar 7 00:52:38.960746 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Mar 7 00:52:38.960818 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Mar 7 00:52:38.960891 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Mar 7 00:52:38.960958 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Mar 7 00:52:38.961029 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Mar 7 00:52:38.961176 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Mar 7 00:52:38.961252 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Mar 7 00:52:38.961326 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Mar 7 00:52:38.961397 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Mar 7 00:52:38.961464 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Mar 7 00:52:38.961537 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Mar 7 00:52:38.961615 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Mar 7 00:52:38.961685 kernel: pci 0000:01:00.0: BAR 
4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 7 00:52:38.961768 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Mar 7 00:52:38.961838 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Mar 7 00:52:38.961909 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Mar 7 00:52:38.961975 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Mar 7 00:52:38.962040 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Mar 7 00:52:38.962178 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Mar 7 00:52:38.962256 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Mar 7 00:52:38.962321 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Mar 7 00:52:38.962384 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Mar 7 00:52:38.962448 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Mar 7 00:52:38.962521 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Mar 7 00:52:38.962589 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Mar 7 00:52:38.962657 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Mar 7 00:52:38.962760 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Mar 7 00:52:38.962849 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Mar 7 00:52:38.962919 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Mar 7 00:52:38.962996 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Mar 7 00:52:38.963083 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Mar 7 00:52:38.963153 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Mar 7 00:52:38.963218 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Mar 7 00:52:38.963283 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Mar 7 00:52:38.963357 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Mar 7 00:52:38.963432 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Mar 7 00:52:38.963500 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Mar 7 00:52:38.963565 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Mar 7 00:52:38.963631 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Mar 7 00:52:38.963696 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Mar 7 00:52:38.963807 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Mar 7 00:52:38.963884 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Mar 7 00:52:38.963956 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Mar 7 00:52:38.964027 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Mar 7 00:52:38.964189 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Mar 7 00:52:38.964257 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 7 00:52:38.964331 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Mar 7 00:52:38.964397 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Mar 7 00:52:38.964475 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Mar 7 00:52:38.964562 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Mar 7 00:52:38.964629 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Mar 7 00:52:38.964701 kernel: pci 0000:00:02.6: bridge window [mem 
0x10c00000-0x10dfffff] Mar 7 00:52:38.964779 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 7 00:52:38.964848 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Mar 7 00:52:38.964913 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Mar 7 00:52:38.964978 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Mar 7 00:52:38.965041 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 7 00:52:38.965216 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Mar 7 00:52:38.965281 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Mar 7 00:52:38.965352 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Mar 7 00:52:38.965420 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Mar 7 00:52:38.965488 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 7 00:52:38.965547 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 7 00:52:38.965604 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 7 00:52:38.965684 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Mar 7 00:52:38.965764 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Mar 7 00:52:38.965833 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Mar 7 00:52:38.965902 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Mar 7 00:52:38.965963 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Mar 7 00:52:38.966021 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Mar 7 00:52:38.966106 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Mar 7 00:52:38.966170 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Mar 7 00:52:38.966236 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Mar 7 00:52:38.966305 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Mar 7 00:52:38.966368 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Mar 7 00:52:38.966444 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Mar 7 00:52:38.966515 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Mar 7 00:52:38.966576 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Mar 7 00:52:38.966637 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Mar 7 00:52:38.966709 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Mar 7 00:52:38.966819 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Mar 7 00:52:38.966890 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Mar 7 00:52:38.966961 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Mar 7 00:52:38.967026 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Mar 7 00:52:38.967120 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Mar 7 00:52:38.967193 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Mar 7 00:52:38.967254 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Mar 7 00:52:38.967315 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Mar 7 00:52:38.967383 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Mar 7 00:52:38.967450 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Mar 7 00:52:38.967513 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Mar 7 00:52:38.967523 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 
Mar 7 00:52:38.967531 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 7 00:52:38.967539 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 7 00:52:38.967547 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 7 00:52:38.967555 kernel: iommu: Default domain type: Translated Mar 7 00:52:38.967563 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 7 00:52:38.967571 kernel: efivars: Registered efivars operations Mar 7 00:52:38.967579 kernel: vgaarb: loaded Mar 7 00:52:38.967589 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 7 00:52:38.967597 kernel: VFS: Disk quotas dquot_6.6.0 Mar 7 00:52:38.967604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 7 00:52:38.967612 kernel: pnp: PnP ACPI init Mar 7 00:52:38.967696 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 7 00:52:38.967707 kernel: pnp: PnP ACPI: found 1 devices Mar 7 00:52:38.967715 kernel: NET: Registered PF_INET protocol family Mar 7 00:52:38.967734 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 7 00:52:38.967747 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 7 00:52:38.967755 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 7 00:52:38.967763 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 7 00:52:38.967771 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 7 00:52:38.967779 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 7 00:52:38.967787 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 00:52:38.967795 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 7 00:52:38.967803 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 7 00:52:38.967884 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Mar 7 00:52:38.967898 kernel: PCI: CLS 0 bytes, default 64 Mar 7 00:52:38.967906 kernel: kvm [1]: HYP mode not available Mar 7 00:52:38.967914 kernel: Initialise system trusted keyrings Mar 7 00:52:38.967922 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 7 00:52:38.967929 kernel: Key type asymmetric registered Mar 7 00:52:38.967937 kernel: Asymmetric key parser 'x509' registered Mar 7 00:52:38.967945 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 7 00:52:38.967953 kernel: io scheduler mq-deadline registered Mar 7 00:52:38.967961 kernel: io scheduler kyber registered Mar 7 00:52:38.967971 kernel: io scheduler bfq registered Mar 7 00:52:38.967980 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Mar 7 00:52:38.968093 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Mar 7 00:52:38.968167 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Mar 7 00:52:38.968232 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.968299 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Mar 7 00:52:38.968364 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Mar 7 00:52:38.968432 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.968499 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Mar 7 00:52:38.968564 kernel: pcieport 0000:00:02.2: AER: enabled with 
IRQ 52 Mar 7 00:52:38.968629 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.968698 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Mar 7 00:52:38.968778 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Mar 7 00:52:38.968853 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.968932 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Mar 7 00:52:38.969020 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Mar 7 00:52:38.969141 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.969217 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Mar 7 00:52:38.969287 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Mar 7 00:52:38.969360 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.969430 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Mar 7 00:52:38.969497 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Mar 7 00:52:38.969563 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.969632 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Mar 7 00:52:38.969698 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Mar 7 00:52:38.969810 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.969824 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Mar 7 00:52:38.969893 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Mar 7 00:52:38.969961 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Mar 7 00:52:38.970026 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 7 00:52:38.970037 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 7 00:52:38.970126 kernel: ACPI: button: Power Button [PWRB] Mar 7 00:52:38.970136 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 7 00:52:38.970218 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Mar 7 00:52:38.970290 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Mar 7 00:52:38.970303 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 7 00:52:38.970311 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 7 00:52:38.970378 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Mar 7 00:52:38.970389 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Mar 7 00:52:38.970397 kernel: thunder_xcv, ver 1.0 Mar 7 00:52:38.970408 kernel: thunder_bgx, ver 1.0 Mar 7 00:52:38.970415 kernel: nicpf, ver 1.0 Mar 7 00:52:38.970423 kernel: nicvf, ver 1.0 Mar 7 00:52:38.970521 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 7 00:52:38.970605 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:52:38 UTC (1772844758) Mar 7 00:52:38.970616 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 7 00:52:38.970625 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 7 00:52:38.970633 kernel: 
watchdog: Delayed init of the lockup detector failed: -19 Mar 7 00:52:38.970644 kernel: watchdog: Hard watchdog permanently disabled Mar 7 00:52:38.970652 kernel: NET: Registered PF_INET6 protocol family Mar 7 00:52:38.970660 kernel: Segment Routing with IPv6 Mar 7 00:52:38.970668 kernel: In-situ OAM (IOAM) with IPv6 Mar 7 00:52:38.970675 kernel: NET: Registered PF_PACKET protocol family Mar 7 00:52:38.970683 kernel: Key type dns_resolver registered Mar 7 00:52:38.970691 kernel: registered taskstats version 1 Mar 7 00:52:38.970699 kernel: Loading compiled-in X.509 certificates Mar 7 00:52:38.970707 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9' Mar 7 00:52:38.970716 kernel: Key type .fscrypt registered Mar 7 00:52:38.970735 kernel: Key type fscrypt-provisioning registered Mar 7 00:52:38.970744 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 7 00:52:38.970752 kernel: ima: Allocated hash algorithm: sha1 Mar 7 00:52:38.970760 kernel: ima: No architecture policies found Mar 7 00:52:38.970768 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 7 00:52:38.970775 kernel: clk: Disabling unused clocks Mar 7 00:52:38.970783 kernel: Freeing unused kernel memory: 39424K Mar 7 00:52:38.970791 kernel: Run /init as init process Mar 7 00:52:38.970801 kernel: with arguments: Mar 7 00:52:38.970809 kernel: /init Mar 7 00:52:38.970817 kernel: with environment: Mar 7 00:52:38.970825 kernel: HOME=/ Mar 7 00:52:38.970832 kernel: TERM=linux Mar 7 00:52:38.970842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 00:52:38.970852 systemd[1]: Detected virtualization kvm. Mar 7 00:52:38.970861 systemd[1]: Detected architecture arm64. Mar 7 00:52:38.970871 systemd[1]: Running in initrd. Mar 7 00:52:38.970879 systemd[1]: No hostname configured, using default hostname. Mar 7 00:52:38.970887 systemd[1]: Hostname set to . Mar 7 00:52:38.970895 systemd[1]: Initializing machine ID from VM UUID. Mar 7 00:52:38.970904 systemd[1]: Queued start job for default target initrd.target. Mar 7 00:52:38.970913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:52:38.970921 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:52:38.970930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 7 00:52:38.970940 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 00:52:38.970948 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 7 00:52:38.970957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 7 00:52:38.970966 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 7 00:52:38.970975 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 7 00:52:38.970983 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Mar 7 00:52:38.970992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:52:38.971003 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:52:38.971013 systemd[1]: Reached target slices.target - Slice Units. Mar 7 00:52:38.971021 systemd[1]: Reached target swap.target - Swaps. Mar 7 00:52:38.971029 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:52:38.971038 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:52:38.971124 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:52:38.971134 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 7 00:52:38.971143 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 7 00:52:38.971159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:52:38.971168 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 00:52:38.971177 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:52:38.971185 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:52:38.971194 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 7 00:52:38.971203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 00:52:38.971211 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 7 00:52:38.971220 systemd[1]: Starting systemd-fsck-usr.service... Mar 7 00:52:38.971228 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 00:52:38.971238 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 00:52:38.971247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:38.971258 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 7 00:52:38.971267 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:52:38.971278 systemd[1]: Finished systemd-fsck-usr.service. Mar 7 00:52:38.971319 systemd-journald[237]: Collecting audit messages is disabled. Mar 7 00:52:38.971345 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 7 00:52:38.971354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 7 00:52:38.971364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:38.971374 systemd-journald[237]: Journal started Mar 7 00:52:38.971395 systemd-journald[237]: Runtime Journal (/run/log/journal/468eee0d36c2448394292e7fa7b26813) is 8.0M, max 76.6M, 68.6M free. Mar 7 00:52:38.948693 systemd-modules-load[238]: Inserted module 'overlay' Mar 7 00:52:38.974071 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 7 00:52:38.975874 systemd-modules-load[238]: Inserted module 'br_netfilter' Mar 7 00:52:38.978157 kernel: Bridge firewalling registered Mar 7 00:52:38.978196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:52:38.983128 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 00:52:38.983203 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 00:52:38.988081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Mar 7 00:52:39.004356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:52:39.007318 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 00:52:39.008311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:52:39.013132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:52:39.022793 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 7 00:52:39.023874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:52:39.037153 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:52:39.043433 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 00:52:39.057177 dracut-cmdline[269]: dracut-dracut-053 Mar 7 00:52:39.061715 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6 Mar 7 00:52:39.084869 systemd-resolved[274]: Positive Trust Anchors: Mar 7 00:52:39.084890 systemd-resolved[274]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:52:39.084923 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:52:39.091090 systemd-resolved[274]: Defaulting to hostname 'linux'. Mar 7 00:52:39.092978 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:52:39.095662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:52:39.172239 kernel: SCSI subsystem initialized Mar 7 00:52:39.177090 kernel: Loading iSCSI transport class v2.0-870. Mar 7 00:52:39.185108 kernel: iscsi: registered transport (tcp) Mar 7 00:52:39.199130 kernel: iscsi: registered transport (qla4xxx) Mar 7 00:52:39.199242 kernel: QLogic iSCSI HBA Driver Mar 7 00:52:39.253857 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 7 00:52:39.264384 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 7 00:52:39.285789 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 7 00:52:39.285854 kernel: device-mapper: uevent: version 1.0.3 Mar 7 00:52:39.285866 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 7 00:52:39.339127 kernel: raid6: neonx8 gen() 15730 MB/s Mar 7 00:52:39.356124 kernel: raid6: neonx4 gen() 15511 MB/s Mar 7 00:52:39.373139 kernel: raid6: neonx2 gen() 13051 MB/s Mar 7 00:52:39.390122 kernel: raid6: neonx1 gen() 10291 MB/s Mar 7 00:52:39.407116 kernel: raid6: int64x8 gen() 6870 MB/s Mar 7 00:52:39.424134 kernel: raid6: int64x4 gen() 7126 MB/s Mar 7 00:52:39.441157 kernel: raid6: int64x2 gen() 6035 MB/s Mar 7 00:52:39.458115 kernel: raid6: int64x1 gen() 4992 MB/s Mar 7 00:52:39.458197 kernel: raid6: using algorithm neonx8 gen() 15730 MB/s Mar 7 00:52:39.475132 kernel: raid6: .... xor() 11800 MB/s, rmw enabled Mar 7 00:52:39.475227 kernel: raid6: using neon recovery algorithm Mar 7 00:52:39.482355 kernel: xor: measuring software checksum speed Mar 7 00:52:39.482440 kernel: 8regs : 16701 MB/sec Mar 7 00:52:39.482461 kernel: 32regs : 16576 MB/sec Mar 7 00:52:39.482494 kernel: arm64_neon : 22803 MB/sec Mar 7 00:52:39.483173 kernel: xor: using function: arm64_neon (22803 MB/sec) Mar 7 00:52:39.536102 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 7 00:52:39.552222 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 7 00:52:39.559420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:52:39.588557 systemd-udevd[456]: Using default interface naming scheme 'v255'. Mar 7 00:52:39.593082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:52:39.606391 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 7 00:52:39.622863 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Mar 7 00:52:39.669500 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:52:39.677350 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:52:39.745081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:52:39.752363 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 7 00:52:39.783076 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 7 00:52:39.785882 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:52:39.788337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:52:39.790101 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:52:39.796239 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 7 00:52:39.813165 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:52:39.864272 kernel: scsi host0: Virtio SCSI HBA Mar 7 00:52:39.878251 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 7 00:52:39.878370 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Mar 7 00:52:39.886814 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:52:39.888300 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 7 00:52:39.893844 kernel: ACPI: bus type USB registered Mar 7 00:52:39.893907 kernel: usbcore: registered new interface driver usbfs Mar 7 00:52:39.893966 kernel: usbcore: registered new interface driver hub Mar 7 00:52:39.893980 kernel: usbcore: registered new device driver usb Mar 7 00:52:39.892631 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:52:39.895454 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:52:39.895769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:39.899919 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:39.907309 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:39.917676 kernel: sr 0:0:0:0: Power-on or device reset occurred Mar 7 00:52:39.925115 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:39.928331 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Mar 7 00:52:39.928583 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 7 00:52:39.932061 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Mar 7 00:52:39.935296 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 7 00:52:39.938691 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 00:52:39.938912 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Mar 7 00:52:39.939000 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 7 00:52:39.941593 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Mar 7 00:52:39.941822 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Mar 7 00:52:39.941917 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Mar 7 00:52:39.954349 kernel: hub 1-0:1.0: USB hub found Mar 7 00:52:39.955061 kernel: hub 1-0:1.0: 4 ports detected Mar 7 00:52:39.964075 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 7 00:52:39.964276 kernel: hub 2-0:1.0: USB hub found Mar 7 00:52:39.964372 kernel: hub 2-0:1.0: 4 ports detected Mar 7 00:52:39.971661 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:52:39.977422 kernel: sd 0:0:0:1: Power-on or device reset occurred Mar 7 00:52:39.977652 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Mar 7 00:52:39.977766 kernel: sd 0:0:0:1: [sda] Write Protect is off Mar 7 00:52:39.979661 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Mar 7 00:52:39.979886 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Mar 7 00:52:39.984394 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 7 00:52:39.984449 kernel: GPT:17805311 != 80003071 Mar 7 00:52:39.984467 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 7 00:52:39.984477 kernel: GPT:17805311 != 80003071 Mar 7 00:52:39.984486 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 7 00:52:39.984495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:52:39.987081 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Mar 7 00:52:40.020074 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (518) Mar 7 00:52:40.032025 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Mar 7 00:52:40.033759 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (515) Mar 7 00:52:40.044855 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Mar 7 00:52:40.046902 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Mar 7 00:52:40.055926 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Mar 7 00:52:40.063488 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 00:52:40.067254 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 7 00:52:40.099097 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:52:40.100169 disk-uuid[574]: Primary Header is updated. Mar 7 00:52:40.100169 disk-uuid[574]: Secondary Entries is updated. Mar 7 00:52:40.100169 disk-uuid[574]: Secondary Header is updated. Mar 7 00:52:40.202076 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 7 00:52:40.337331 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Mar 7 00:52:40.337407 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Mar 7 00:52:40.338521 kernel: usbcore: registered new interface driver usbhid Mar 7 00:52:40.338565 kernel: usbhid: USB HID core driver Mar 7 00:52:40.445144 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Mar 7 00:52:40.581089 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Mar 7 00:52:40.634077 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Mar 7 00:52:41.121407 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Mar 7 00:52:41.121512 disk-uuid[575]: The operation has completed successfully. Mar 7 00:52:41.179922 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 7 00:52:41.180057 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 7 00:52:41.190338 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 7 00:52:41.197638 sh[592]: Success Mar 7 00:52:41.212070 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 7 00:52:41.270084 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 7 00:52:41.286451 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 7 00:52:41.293868 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 7 00:52:41.306621 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31 Mar 7 00:52:41.306723 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:41.306754 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 7 00:52:41.306780 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 7 00:52:41.308066 kernel: BTRFS info (device dm-0): using free space tree Mar 7 00:52:41.317087 kernel: BTRFS info (device dm-0): enabling ssd optimizations Mar 7 00:52:41.319395 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
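verity-setup above binds /dev/mapper/usr to a read-only /usr image whose blocks are checked on read against SHA-256 digests (the "sha256-ce" line is the accelerated implementation being selected; the expected root hash comes from the verity.usrhash= kernel argument). A toy Python sketch of the per-block check, using a flat list of reference digests instead of dm-verity's real hash tree:

import hashlib

BLOCK = 4096

def reference_digests(image: bytes):
    # Computed once when the image is built and shipped alongside it.
    return [hashlib.sha256(image[i:i + BLOCK]).digest()
            for i in range(0, len(image), BLOCK)]

def verified_read(image: bytes, idx: int, refs):
    # dm-verity performs this check lazily, on the first read of each block.
    blk = image[idx * BLOCK:(idx + 1) * BLOCK]
    if hashlib.sha256(blk).digest() != refs[idx]:
        raise IOError(f"verity: block {idx} does not match its reference digest")
    return blk

image = bytes(8 * BLOCK)                  # stand-in for the /usr partition contents
refs = reference_digests(image)
print(len(verified_read(image, 3, refs)), "bytes verified")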
Mar 7 00:52:41.322012 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 7 00:52:41.330308 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 7 00:52:41.336330 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 7 00:52:41.346581 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:41.346637 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:41.346649 kernel: BTRFS info (device sda6): using free space tree Mar 7 00:52:41.350059 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 00:52:41.350114 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 00:52:41.359470 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 7 00:52:41.360227 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:41.368466 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 7 00:52:41.376359 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 7 00:52:41.473279 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 00:52:41.482328 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:52:41.490640 ignition[664]: Ignition 2.19.0 Mar 7 00:52:41.490651 ignition[664]: Stage: fetch-offline Mar 7 00:52:41.490750 ignition[664]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:41.490763 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:41.491542 ignition[664]: parsed url from cmdline: "" Mar 7 00:52:41.491546 ignition[664]: no config URL provided Mar 7 00:52:41.491554 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 00:52:41.495131 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:52:41.491566 ignition[664]: no config at "/usr/lib/ignition/user.ign" Mar 7 00:52:41.491572 ignition[664]: failed to fetch config: resource requires networking Mar 7 00:52:41.491830 ignition[664]: Ignition finished successfully Mar 7 00:52:41.506139 systemd-networkd[781]: lo: Link UP Mar 7 00:52:41.506154 systemd-networkd[781]: lo: Gained carrier Mar 7 00:52:41.507797 systemd-networkd[781]: Enumeration completed Mar 7 00:52:41.508003 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:52:41.509279 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:41.509283 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:41.510299 systemd[1]: Reached target network.target - Network. Mar 7 00:52:41.511238 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:41.511241 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:41.511938 systemd-networkd[781]: eth0: Link UP Mar 7 00:52:41.511942 systemd-networkd[781]: eth0: Gained carrier Mar 7 00:52:41.511953 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
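The fetch-offline stage above works through a fixed search order: a config URL passed on the kernel command line, then a config baked into the image at /usr/lib/ignition/user.ign, and only then does it give up with "resource requires networking" so the online fetch stage can run once the network is up. A hedged sketch of that ordering (the function is illustrative, not Ignition's code):

import os

def fetch_offline(cmdline_url=None):
    # 1. An explicit config URL on the kernel command line wins.
    if cmdline_url:
        return ("cmdline", cmdline_url)
    # 2. Otherwise look for a config baked into the image.
    baked_in = "/usr/lib/ignition/user.ign"
    if os.path.exists(baked_in):
        with open(baked_in, "rb") as f:
            return ("baked-in", f.read())
    # 3. Nothing available locally: defer to the networked fetch stage.
    raise RuntimeError("failed to fetch config: resource requires networking")

try:
    fetch_offline(cmdline_url=None)
except RuntimeError as err:
    print(err)   # mirrors the message logged by the fetch-offline stage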
Mar 7 00:52:41.519240 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 7 00:52:41.519574 systemd-networkd[781]: eth1: Link UP Mar 7 00:52:41.519577 systemd-networkd[781]: eth1: Gained carrier Mar 7 00:52:41.519589 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:41.538113 ignition[784]: Ignition 2.19.0 Mar 7 00:52:41.538136 ignition[784]: Stage: fetch Mar 7 00:52:41.538506 ignition[784]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:41.538528 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:41.538810 ignition[784]: parsed url from cmdline: "" Mar 7 00:52:41.538819 ignition[784]: no config URL provided Mar 7 00:52:41.538831 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Mar 7 00:52:41.538852 ignition[784]: no config at "/usr/lib/ignition/user.ign" Mar 7 00:52:41.538889 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Mar 7 00:52:41.539837 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Mar 7 00:52:41.563154 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 7 00:52:41.575159 systemd-networkd[781]: eth0: DHCPv4 address 116.202.31.117/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 7 00:52:41.740145 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Mar 7 00:52:41.746189 ignition[784]: GET result: OK Mar 7 00:52:41.746379 ignition[784]: parsing config with SHA512: 97ec8cd1e39d8702b34919e9ff8d783008ab79efafd553617c37822ca256b523340136af5f312940c14b9035ae7171f94d69e78e2f6c2d311e73756b43ed426f Mar 7 00:52:41.752624 unknown[784]: fetched base config from "system" Mar 7 00:52:41.752634 unknown[784]: fetched base config from "system" Mar 7 00:52:41.753098 ignition[784]: fetch: fetch complete Mar 7 00:52:41.752639 unknown[784]: fetched user config from "hetzner" Mar 7 00:52:41.753104 ignition[784]: fetch: fetch passed Mar 7 00:52:41.753163 ignition[784]: Ignition finished successfully Mar 7 00:52:41.757108 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 7 00:52:41.761250 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 7 00:52:41.778572 ignition[792]: Ignition 2.19.0 Mar 7 00:52:41.778582 ignition[792]: Stage: kargs Mar 7 00:52:41.778849 ignition[792]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:41.778863 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:41.779981 ignition[792]: kargs: kargs passed Mar 7 00:52:41.780084 ignition[792]: Ignition finished successfully Mar 7 00:52:41.782559 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 7 00:52:41.789426 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 7 00:52:41.806674 ignition[799]: Ignition 2.19.0 Mar 7 00:52:41.806706 ignition[799]: Stage: disks Mar 7 00:52:41.806954 ignition[799]: no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:41.806966 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:41.808647 ignition[799]: disks: disks passed Mar 7 00:52:41.808724 ignition[799]: Ignition finished successfully Mar 7 00:52:41.812799 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 7 00:52:41.815748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
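With networkd up and DHCP leases in place, the fetch stage retries the Hetzner metadata endpoint shown in the log: attempt #1 fails while the network is still unreachable, attempt #2 succeeds and the payload is checksummed and parsed. A minimal retry loop under those assumptions (the back-off constants are illustrative, not Ignition's):

import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/hetzner/v1/userdata"

def fetch_userdata(max_attempts=5, delay=2.0):
    for attempt in range(1, max_attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as err:
            # e.g. "connect: network is unreachable" before DHCP finishes
            print(f"GET {URL}: attempt #{attempt} failed: {err}")
            time.sleep(delay)
    raise RuntimeError("failed to fetch userdata after retries")

# userdata = fetch_userdata()   # only reachable from inside a Hetzner VM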
Mar 7 00:52:41.816990 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 7 00:52:41.818673 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:52:41.819893 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:52:41.820972 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:52:41.827391 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 7 00:52:41.850431 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Mar 7 00:52:41.856634 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 7 00:52:41.866262 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 7 00:52:41.928061 kernel: EXT4-fs (sda9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none. Mar 7 00:52:41.928571 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 7 00:52:41.929852 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 7 00:52:41.942220 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 00:52:41.948866 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 7 00:52:41.952239 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 7 00:52:41.956181 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 7 00:52:41.956223 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:52:41.958125 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 7 00:52:41.972986 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (815) Mar 7 00:52:41.973073 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:41.973087 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:41.973283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 7 00:52:41.981092 kernel: BTRFS info (device sda6): using free space tree Mar 7 00:52:41.987219 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 00:52:41.987288 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 00:52:41.997565 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 00:52:42.034353 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Mar 7 00:52:42.039452 coreos-metadata[817]: Mar 07 00:52:42.039 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Mar 7 00:52:42.042090 coreos-metadata[817]: Mar 07 00:52:42.041 INFO Fetch successful Mar 7 00:52:42.043227 coreos-metadata[817]: Mar 07 00:52:42.042 INFO wrote hostname ci-4081-3-6-n-2a659a64a8 to /sysroot/etc/hostname Mar 7 00:52:42.046733 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Mar 7 00:52:42.047438 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 00:52:42.057105 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Mar 7 00:52:42.063333 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Mar 7 00:52:42.174636 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 7 00:52:42.179201 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
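flatcar-metadata-hostname does a similar metadata round-trip for the machine name: it fetches .../metadata/hostname and writes the answer (ci-4081-3-6-n-2a659a64a8 here) into the not-yet-pivoted root at /sysroot/etc/hostname. A short sketch under those assumptions:

import urllib.request

METADATA_HOSTNAME = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_hostname(sysroot="/sysroot"):
    with urllib.request.urlopen(METADATA_HOSTNAME, timeout=5) as resp:
        hostname = resp.read().decode().strip()
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname

# print("wrote hostname", write_hostname())   # requires the metadata service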
Mar 7 00:52:42.192528 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 7 00:52:42.194227 kernel: BTRFS info (device sda6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:42.221523 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 7 00:52:42.225304 ignition[932]: INFO : Ignition 2.19.0 Mar 7 00:52:42.225304 ignition[932]: INFO : Stage: mount Mar 7 00:52:42.225304 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:42.225304 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:42.227826 ignition[932]: INFO : mount: mount passed Mar 7 00:52:42.227826 ignition[932]: INFO : Ignition finished successfully Mar 7 00:52:42.228260 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 7 00:52:42.235246 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 7 00:52:42.307993 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 7 00:52:42.315418 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 7 00:52:42.341336 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943) Mar 7 00:52:42.344289 kernel: BTRFS info (device sda6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e Mar 7 00:52:42.344368 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Mar 7 00:52:42.344380 kernel: BTRFS info (device sda6): using free space tree Mar 7 00:52:42.348388 kernel: BTRFS info (device sda6): enabling ssd optimizations Mar 7 00:52:42.348477 kernel: BTRFS info (device sda6): auto enabling async discard Mar 7 00:52:42.352865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 7 00:52:42.380477 ignition[960]: INFO : Ignition 2.19.0 Mar 7 00:52:42.380477 ignition[960]: INFO : Stage: files Mar 7 00:52:42.381623 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:42.381623 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:42.383734 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Mar 7 00:52:42.385090 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 7 00:52:42.385090 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 7 00:52:42.390510 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 7 00:52:42.392258 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 7 00:52:42.393767 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 7 00:52:42.393543 unknown[960]: wrote ssh authorized keys file for user: core Mar 7 00:52:42.398037 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 00:52:42.398037 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 7 00:52:42.445161 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 7 00:52:42.528475 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 7 00:52:42.528475 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 7 00:52:42.533119 
ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:52:42.533119 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 7 00:52:42.822448 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 7 00:52:43.056407 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 7 00:52:43.056407 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Mar 7 00:52:43.059296 
ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Mar 7 00:52:43.059296 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Mar 7 00:52:43.070023 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:52:43.070023 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 7 00:52:43.070023 ignition[960]: INFO : files: files passed Mar 7 00:52:43.070023 ignition[960]: INFO : Ignition finished successfully Mar 7 00:52:43.062465 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 7 00:52:43.070335 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 7 00:52:43.075280 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 7 00:52:43.081267 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 7 00:52:43.082157 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 7 00:52:43.094975 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:52:43.094975 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:52:43.098672 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 7 00:52:43.101059 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:52:43.102937 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 7 00:52:43.110346 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 7 00:52:43.112173 systemd-networkd[781]: eth0: Gained IPv6LL Mar 7 00:52:43.112685 systemd-networkd[781]: eth1: Gained IPv6LL Mar 7 00:52:43.159238 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 7 00:52:43.160140 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 7 00:52:43.162502 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 7 00:52:43.164251 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 7 00:52:43.165261 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 7 00:52:43.175377 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 7 00:52:43.192902 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:52:43.200629 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 7 00:52:43.214904 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:52:43.215846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:52:43.217341 systemd[1]: Stopped target timers.target - Timer Units. Mar 7 00:52:43.218546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 7 00:52:43.218710 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 7 00:52:43.220461 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
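The files stage above is driven entirely by the fetched Ignition config: it creates the core user with its ssh keys, downloads files such as the Helm tarball and the kubernetes sysext image, writes the /etc/extensions/kubernetes.raw symlink, installs prepare-helm.service plus the coreos-metadata drop-in, and records the preset. A partial, illustrative config that would produce operations of that shape, written as a Python dict so it can be dumped to JSON (field names follow the Ignition v3 schema as I understand it; the ssh key and unit body are placeholders, the URLs and paths are the ones from the log):

import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [{"name": "core",
                   "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}],
    },
    "storage": {
        "files": [{
            "path": "/opt/helm-v3.17.3-linux-arm64.tar.gz",
            "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz"},
        }],
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw",
        }],
    },
    "systemd": {
        "units": [{"name": "prepare-helm.service", "enabled": True,
                   "contents": "[Unit]\nDescription=placeholder\n"}],
    },
}

print(json.dumps(config, indent=2))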
Mar 7 00:52:43.221131 systemd[1]: Stopped target basic.target - Basic System. Mar 7 00:52:43.222389 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 7 00:52:43.223505 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 7 00:52:43.224650 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 7 00:52:43.225832 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 7 00:52:43.226930 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 7 00:52:43.228154 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 7 00:52:43.229257 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 7 00:52:43.230421 systemd[1]: Stopped target swap.target - Swaps. Mar 7 00:52:43.231346 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 7 00:52:43.231475 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 7 00:52:43.232891 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:52:43.233613 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:52:43.234696 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 7 00:52:43.234779 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:52:43.235850 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 7 00:52:43.235978 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 7 00:52:43.237574 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 7 00:52:43.237726 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 7 00:52:43.238987 systemd[1]: ignition-files.service: Deactivated successfully. Mar 7 00:52:43.239116 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 7 00:52:43.240242 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 7 00:52:43.240345 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 7 00:52:43.251531 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 7 00:52:43.253585 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 7 00:52:43.253824 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:52:43.257315 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 7 00:52:43.257813 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 7 00:52:43.257956 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:52:43.263306 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 7 00:52:43.263444 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 7 00:52:43.273522 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 7 00:52:43.275327 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 7 00:52:43.281069 ignition[1013]: INFO : Ignition 2.19.0 Mar 7 00:52:43.281069 ignition[1013]: INFO : Stage: umount Mar 7 00:52:43.281069 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 7 00:52:43.281069 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Mar 7 00:52:43.285392 ignition[1013]: INFO : umount: umount passed Mar 7 00:52:43.285392 ignition[1013]: INFO : Ignition finished successfully Mar 7 00:52:43.288414 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 7 00:52:43.288926 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 7 00:52:43.292371 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 7 00:52:43.293197 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 7 00:52:43.293304 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 7 00:52:43.294442 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 7 00:52:43.294501 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 7 00:52:43.296042 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 7 00:52:43.296124 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 7 00:52:43.297026 systemd[1]: Stopped target network.target - Network. Mar 7 00:52:43.297909 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 7 00:52:43.297972 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 7 00:52:43.299073 systemd[1]: Stopped target paths.target - Path Units. Mar 7 00:52:43.299998 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 7 00:52:43.300093 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:52:43.301343 systemd[1]: Stopped target slices.target - Slice Units. Mar 7 00:52:43.302221 systemd[1]: Stopped target sockets.target - Socket Units. Mar 7 00:52:43.303086 systemd[1]: iscsid.socket: Deactivated successfully. Mar 7 00:52:43.303131 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 7 00:52:43.304075 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 7 00:52:43.304111 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 7 00:52:43.305153 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 7 00:52:43.305207 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 7 00:52:43.306026 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 7 00:52:43.306097 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 7 00:52:43.307422 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 7 00:52:43.310317 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 7 00:52:43.311331 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 7 00:52:43.311439 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 7 00:52:43.313308 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 7 00:52:43.313402 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 7 00:52:43.313413 systemd-networkd[781]: eth0: DHCPv6 lease lost Mar 7 00:52:43.317357 systemd-networkd[781]: eth1: DHCPv6 lease lost Mar 7 00:52:43.319203 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 7 00:52:43.319359 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 7 00:52:43.320726 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Mar 7 00:52:43.320765 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:52:43.329322 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 7 00:52:43.329901 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 7 00:52:43.329979 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 7 00:52:43.332041 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:52:43.333001 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 7 00:52:43.333133 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 7 00:52:43.348162 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 7 00:52:43.348282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:52:43.350822 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 7 00:52:43.350882 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 7 00:52:43.351993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 7 00:52:43.352071 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:52:43.353630 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 7 00:52:43.353823 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 7 00:52:43.354896 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 7 00:52:43.355179 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:52:43.357429 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 7 00:52:43.357495 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 7 00:52:43.359245 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 7 00:52:43.359284 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:52:43.360561 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 7 00:52:43.360618 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 7 00:52:43.362155 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 7 00:52:43.362206 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 7 00:52:43.363683 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 7 00:52:43.363733 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 7 00:52:43.372287 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 7 00:52:43.372927 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 7 00:52:43.372998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:52:43.375568 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:52:43.375630 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:43.383675 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 7 00:52:43.383813 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 7 00:52:43.386440 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 7 00:52:43.390554 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 7 00:52:43.404612 systemd[1]: Switching root. 
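"Switching root." is the hand-off from the initramfs to the installed system: the prepared /sysroot becomes /, PID 1 enters it and re-executes the on-disk systemd. A very rough sketch of the shape of that operation (it follows the classic switch_root sequence, must run as root, and is not systemd's actual implementation):

import os

def switch_root(new_root="/sysroot", init="/usr/lib/systemd/systemd"):
    os.chdir(new_root)               # stand inside the prepared root
    os.system("mount --move . /")    # move the mount on top of / (MS_MOVE)
    os.chroot(".")                   # make the current directory the new root
    os.chdir("/")
    os.execv(init, [init])           # become the real init, keeping PID 1

# switch_root()   # only meaningful as PID 1 inside an initramfs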
Mar 7 00:52:43.441145 systemd-journald[237]: Journal stopped Mar 7 00:52:44.455446 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Mar 7 00:52:44.455515 kernel: SELinux: policy capability network_peer_controls=1 Mar 7 00:52:44.455529 kernel: SELinux: policy capability open_perms=1 Mar 7 00:52:44.455542 kernel: SELinux: policy capability extended_socket_class=1 Mar 7 00:52:44.455556 kernel: SELinux: policy capability always_check_network=0 Mar 7 00:52:44.455565 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 7 00:52:44.455575 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 7 00:52:44.455588 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 7 00:52:44.455597 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 7 00:52:44.455607 kernel: audit: type=1403 audit(1772844763.616:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 7 00:52:44.455620 systemd[1]: Successfully loaded SELinux policy in 37.376ms. Mar 7 00:52:44.455649 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.029ms. Mar 7 00:52:44.455663 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 7 00:52:44.455674 systemd[1]: Detected virtualization kvm. Mar 7 00:52:44.455684 systemd[1]: Detected architecture arm64. Mar 7 00:52:44.455699 systemd[1]: Detected first boot. Mar 7 00:52:44.455711 systemd[1]: Hostname set to . Mar 7 00:52:44.455721 systemd[1]: Initializing machine ID from VM UUID. Mar 7 00:52:44.455732 zram_generator::config[1055]: No configuration found. Mar 7 00:52:44.455747 systemd[1]: Populated /etc with preset unit settings. Mar 7 00:52:44.455757 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 7 00:52:44.455768 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 7 00:52:44.455778 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 7 00:52:44.455791 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 7 00:52:44.455801 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 7 00:52:44.455812 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 7 00:52:44.455822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 7 00:52:44.455832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 7 00:52:44.455843 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 7 00:52:44.455853 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 7 00:52:44.455863 systemd[1]: Created slice user.slice - User and Session Slice. Mar 7 00:52:44.455873 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 7 00:52:44.455885 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 7 00:52:44.455896 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 7 00:52:44.455906 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
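"Initializing machine ID from VM UUID" means that on this first boot systemd derives /etc/machine-id from the UUID the hypervisor exposes through DMI rather than generating a random one. A small illustrative read of that source; the exact normalisation systemd applies is an assumption here:

import pathlib
import re

# KVM/QEMU expose the VM UUID via SMBIOS/DMI; reading it may require root.
uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
machine_id = re.sub(r"-", "", uuid).lower()   # assumed: dashes stripped, lower-cased
assert len(machine_id) == 32
print(machine_id)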
Mar 7 00:52:44.455920 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 7 00:52:44.455932 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 7 00:52:44.455942 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Mar 7 00:52:44.455952 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 7 00:52:44.455963 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 7 00:52:44.455976 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 7 00:52:44.455987 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 7 00:52:44.455997 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 7 00:52:44.456008 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 7 00:52:44.456018 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 7 00:52:44.456028 systemd[1]: Reached target slices.target - Slice Units. Mar 7 00:52:44.456038 systemd[1]: Reached target swap.target - Swaps. Mar 7 00:52:44.463033 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 7 00:52:44.463074 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 7 00:52:44.463087 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 7 00:52:44.463100 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 7 00:52:44.463111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 7 00:52:44.463122 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 7 00:52:44.463132 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 7 00:52:44.463143 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 7 00:52:44.463154 systemd[1]: Mounting media.mount - External Media Directory... Mar 7 00:52:44.463167 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 7 00:52:44.463178 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 7 00:52:44.463189 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 7 00:52:44.463200 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 7 00:52:44.463211 systemd[1]: Reached target machines.target - Containers. Mar 7 00:52:44.463222 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 7 00:52:44.463232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:44.463243 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 7 00:52:44.463254 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 7 00:52:44.463265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:52:44.463276 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:52:44.463290 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:52:44.463303 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 7 00:52:44.463313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:52:44.463326 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 7 00:52:44.463338 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 7 00:52:44.463349 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 7 00:52:44.463359 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 7 00:52:44.463379 systemd[1]: Stopped systemd-fsck-usr.service. Mar 7 00:52:44.463392 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 7 00:52:44.463403 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 7 00:52:44.463413 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 7 00:52:44.463424 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 7 00:52:44.463436 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 7 00:52:44.463447 systemd[1]: verity-setup.service: Deactivated successfully. Mar 7 00:52:44.463458 systemd[1]: Stopped verity-setup.service. Mar 7 00:52:44.463468 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 7 00:52:44.463479 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 7 00:52:44.463492 systemd[1]: Mounted media.mount - External Media Directory. Mar 7 00:52:44.463503 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 7 00:52:44.463514 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 7 00:52:44.463524 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 7 00:52:44.463535 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 7 00:52:44.463545 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 7 00:52:44.463556 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 7 00:52:44.463567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:52:44.463580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:52:44.463591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:52:44.463602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:52:44.463612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 7 00:52:44.463624 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 7 00:52:44.463648 kernel: fuse: init (API version 7.39) Mar 7 00:52:44.463707 systemd-journald[1122]: Collecting audit messages is disabled. Mar 7 00:52:44.463738 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 7 00:52:44.463750 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 7 00:52:44.463761 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 7 00:52:44.463772 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 7 00:52:44.463782 kernel: loop: module loaded Mar 7 00:52:44.463792 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 7 00:52:44.463803 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Mar 7 00:52:44.463816 systemd-journald[1122]: Journal started Mar 7 00:52:44.463839 systemd-journald[1122]: Runtime Journal (/run/log/journal/468eee0d36c2448394292e7fa7b26813) is 8.0M, max 76.6M, 68.6M free. Mar 7 00:52:44.146795 systemd[1]: Queued start job for default target multi-user.target. Mar 7 00:52:44.171212 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 7 00:52:44.171732 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 7 00:52:44.471151 systemd[1]: Started systemd-journald.service - Journal Service. Mar 7 00:52:44.473323 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:52:44.474181 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:52:44.480440 kernel: ACPI: bus type drm_connector registered Mar 7 00:52:44.480813 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:52:44.481492 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:52:44.485530 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 7 00:52:44.494816 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 7 00:52:44.500239 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 7 00:52:44.502162 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 7 00:52:44.502206 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 7 00:52:44.504949 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 7 00:52:44.517421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 7 00:52:44.522357 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 7 00:52:44.523104 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:44.526906 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 7 00:52:44.531310 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 7 00:52:44.532013 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:52:44.537415 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 7 00:52:44.538110 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:52:44.543324 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 7 00:52:44.547725 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 7 00:52:44.553125 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 7 00:52:44.560764 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 7 00:52:44.561821 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 7 00:52:44.578438 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 7 00:52:44.580447 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 7 00:52:44.590766 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Mar 7 00:52:44.594774 systemd-journald[1122]: Time spent on flushing to /var/log/journal/468eee0d36c2448394292e7fa7b26813 is 57.834ms for 1126 entries. Mar 7 00:52:44.594774 systemd-journald[1122]: System Journal (/var/log/journal/468eee0d36c2448394292e7fa7b26813) is 8.0M, max 584.8M, 576.8M free. Mar 7 00:52:44.670417 systemd-journald[1122]: Received client request to flush runtime journal. Mar 7 00:52:44.670512 kernel: loop0: detected capacity change from 0 to 114328 Mar 7 00:52:44.670539 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 7 00:52:44.595300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 7 00:52:44.606777 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 7 00:52:44.668757 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 7 00:52:44.671294 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 7 00:52:44.673020 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 7 00:52:44.681132 kernel: loop1: detected capacity change from 0 to 209336 Mar 7 00:52:44.682138 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 7 00:52:44.688174 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 7 00:52:44.699986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 7 00:52:44.738587 kernel: loop2: detected capacity change from 0 to 8 Mar 7 00:52:44.753082 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Mar 7 00:52:44.753489 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Mar 7 00:52:44.765092 kernel: loop3: detected capacity change from 0 to 114432 Mar 7 00:52:44.767699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 7 00:52:44.803115 kernel: loop4: detected capacity change from 0 to 114328 Mar 7 00:52:44.818578 kernel: loop5: detected capacity change from 0 to 209336 Mar 7 00:52:44.847097 kernel: loop6: detected capacity change from 0 to 8 Mar 7 00:52:44.850202 kernel: loop7: detected capacity change from 0 to 114432 Mar 7 00:52:44.864146 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Mar 7 00:52:44.864721 (sd-merge)[1195]: Merged extensions into '/usr'. Mar 7 00:52:44.869975 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Mar 7 00:52:44.869996 systemd[1]: Reloading... Mar 7 00:52:45.001082 zram_generator::config[1221]: No configuration found. Mar 7 00:52:45.078486 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 7 00:52:45.159643 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:52:45.212915 systemd[1]: Reloading finished in 342 ms. Mar 7 00:52:45.238935 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 7 00:52:45.244093 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 7 00:52:45.255745 systemd[1]: Starting ensure-sysext.service... 
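The (sd-merge) lines show systemd-sysext stacking the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images on top of /usr; conceptually the merge is a read-only overlay whose lower layers are the extension trees followed by the original /usr. A hedged sketch of how such an overlay mount command line could be assembled (the /run paths and the direct use of mount are illustrative, not sysext's exact mechanics):

extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes", "oem-hetzner"]

# Assume each extension image has been attached under /run/extensions/<name>.
# In overlayfs the leftmost lowerdir is the topmost layer, so listing the real
# /usr last means extension content wins on any path conflict.
lower_dirs = [f"/run/extensions/{name}/usr" for name in extensions] + ["/usr"]

mount_cmd = [
    "mount", "-t", "overlay", "overlay",
    "-o", "lowerdir=" + ":".join(lower_dirs) + ",ro",
    "/usr",
]
print(" ".join(mount_cmd))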
Mar 7 00:52:45.258253 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 7 00:52:45.272355 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Mar 7 00:52:45.272381 systemd[1]: Reloading... Mar 7 00:52:45.294343 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 7 00:52:45.294793 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 7 00:52:45.295539 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 7 00:52:45.295845 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 7 00:52:45.295902 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Mar 7 00:52:45.299483 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:52:45.299497 systemd-tmpfiles[1259]: Skipping /boot Mar 7 00:52:45.314290 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Mar 7 00:52:45.314303 systemd-tmpfiles[1259]: Skipping /boot Mar 7 00:52:45.358153 zram_generator::config[1286]: No configuration found. Mar 7 00:52:45.463806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:52:45.516736 systemd[1]: Reloading finished in 243 ms. Mar 7 00:52:45.538535 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 7 00:52:45.539683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 7 00:52:45.555442 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:52:45.560446 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 7 00:52:45.571352 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 7 00:52:45.575398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 7 00:52:45.578896 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 7 00:52:45.587333 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 7 00:52:45.596557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:45.607587 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:52:45.612537 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:52:45.629557 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:52:45.630565 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:45.635885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:45.636139 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:45.645389 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 7 00:52:45.647024 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
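The "Duplicate line for path ..., ignoring" warnings above come from several tmpfiles.d fragments declaring the same path; systemd-tmpfiles keeps the declaration it saw first and skips the later ones. A tiny illustration of that de-duplication (the entries are made up, patterned on the files named in the log):

entries = [
    ("provision.conf",       "d /root            0700 root root -"),
    ("systemd-flatcar.conf", "d /var/log/journal 2755 root systemd-journal -"),
    ("systemd.conf",         "d /var/log/journal 2755 root systemd-journal -"),  # duplicate path
]

seen, effective = set(), []
for source, line in entries:
    path = line.split()[1]
    if path in seen:
        print(f'{source}: Duplicate line for path "{path}", ignoring.')
        continue
    seen.add(path)
    effective.append(line)

print(len(effective), "effective entries")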
Mar 7 00:52:45.654680 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 7 00:52:45.658450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:45.667416 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 7 00:52:45.668608 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:45.672184 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Mar 7 00:52:45.672376 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 7 00:52:45.678200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:52:45.680152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:52:45.683094 systemd[1]: Finished ensure-sysext.service. Mar 7 00:52:45.684293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:52:45.684447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:52:45.687778 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 7 00:52:45.687971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 7 00:52:45.694794 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:52:45.695148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:52:45.699193 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:52:45.699293 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:52:45.710207 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 7 00:52:45.716162 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 7 00:52:45.722270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 7 00:52:45.731224 augenrules[1363]: No rules Mar 7 00:52:45.732816 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 7 00:52:45.740752 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:52:45.782094 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 7 00:52:45.783541 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 00:52:45.784835 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 7 00:52:45.872728 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Mar 7 00:52:45.941877 systemd-resolved[1335]: Positive Trust Anchors: Mar 7 00:52:45.941900 systemd-resolved[1335]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 7 00:52:45.941937 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 7 00:52:45.954136 systemd-resolved[1335]: Using system hostname 'ci-4081-3-6-n-2a659a64a8'. Mar 7 00:52:45.954506 systemd-networkd[1365]: lo: Link UP Mar 7 00:52:45.954513 systemd-networkd[1365]: lo: Gained carrier Mar 7 00:52:45.957230 systemd-networkd[1365]: Enumeration completed Mar 7 00:52:45.957371 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 7 00:52:45.961454 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:45.961486 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:45.967275 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:45.967292 systemd-networkd[1365]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 7 00:52:45.968004 systemd-networkd[1365]: eth0: Link UP Mar 7 00:52:45.968015 systemd-networkd[1365]: eth0: Gained carrier Mar 7 00:52:45.968033 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:45.978300 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 7 00:52:45.980189 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 7 00:52:45.983411 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 7 00:52:45.986111 systemd[1]: Reached target network.target - Network. Mar 7 00:52:45.986726 systemd-networkd[1365]: eth1: Link UP Mar 7 00:52:45.986739 systemd-networkd[1365]: eth1: Gained carrier Mar 7 00:52:45.986765 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:45.987141 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 7 00:52:45.988118 systemd[1]: Reached target time-set.target - System Time Set. Mar 7 00:52:46.024385 systemd-networkd[1365]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Mar 7 00:52:46.026762 kernel: mousedev: PS/2 mouse device common for all mice Mar 7 00:52:46.025148 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection. Mar 7 00:52:46.033189 systemd-networkd[1365]: eth0: DHCPv4 address 116.202.31.117/32, gateway 172.31.1.1 acquired from 172.31.1.1 Mar 7 00:52:46.046953 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 7 00:52:46.055745 systemd-networkd[1365]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
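The repeated "found matching network ... based on potentially unpredictable interface name" entries above mean eth0 and eth1 are being matched by kernel interface name through the catch-all zz-default.network. A sketch of a more specific match, assuming a per-interface file keyed on the MAC address (the address and file name below are placeholders, not values from this host), would look like:

  # /etc/systemd/network/10-eth0.network  (hypothetical)
  [Match]
  MACAddress=aa:bb:cc:dd:ee:ff

  [Network]
  DHCP=ipv4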
Mar 7 00:52:46.106266 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1372) Mar 7 00:52:46.114020 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Mar 7 00:52:46.114187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 7 00:52:46.121224 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 7 00:52:46.129253 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 7 00:52:46.133900 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 7 00:52:46.135267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 7 00:52:46.135307 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 7 00:52:46.135703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 7 00:52:46.136526 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 7 00:52:46.138599 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 7 00:52:46.138777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 7 00:52:46.153973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Mar 7 00:52:46.159096 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Mar 7 00:52:46.159186 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 7 00:52:46.159202 kernel: [drm] features: -context_init Mar 7 00:52:46.160144 kernel: [drm] number of scanouts: 1 Mar 7 00:52:46.160195 kernel: [drm] number of cap sets: 0 Mar 7 00:52:46.160385 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 7 00:52:46.161066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 7 00:52:46.165991 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 7 00:52:46.166236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 7 00:52:46.168339 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 7 00:52:46.185162 systemd-timesyncd[1357]: Contacted time server 212.18.3.18:123 (0.flatcar.pool.ntp.org). Mar 7 00:52:46.186172 systemd-timesyncd[1357]: Initial clock synchronization to Sat 2026-03-07 00:52:46.021418 UTC. Mar 7 00:52:46.199155 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 7 00:52:46.210073 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Mar 7 00:52:46.224292 kernel: Console: switching to colour frame buffer device 160x50 Mar 7 00:52:46.230678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:46.237131 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 7 00:52:46.242937 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 7 00:52:46.243147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 7 00:52:46.251381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 7 00:52:46.314931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 7 00:52:46.371952 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 7 00:52:46.384430 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 7 00:52:46.399407 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 00:52:46.430173 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 7 00:52:46.431785 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 7 00:52:46.432994 systemd[1]: Reached target sysinit.target - System Initialization. Mar 7 00:52:46.434414 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 7 00:52:46.435327 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 7 00:52:46.436313 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 00:52:46.437197 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 00:52:46.437942 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 00:52:46.438793 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 00:52:46.438902 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:52:46.439551 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:52:46.441636 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 00:52:46.444277 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 00:52:46.453004 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 00:52:46.455806 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 7 00:52:46.457456 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 00:52:46.458363 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:52:46.459571 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:52:46.460958 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:52:46.461020 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:52:46.466313 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 00:52:46.470937 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 00:52:46.476502 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 00:52:46.483276 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 7 00:52:46.485209 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 00:52:46.488308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 7 00:52:46.489397 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 00:52:46.492140 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 7 00:52:46.497311 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 00:52:46.505235 jq[1446]: false Mar 7 00:52:46.501331 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Mar 7 00:52:46.508380 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 00:52:46.512471 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 00:52:46.518088 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 00:52:46.519612 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 00:52:46.520185 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:52:46.525800 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 00:52:46.531256 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 00:52:46.536007 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 00:52:46.537358 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 00:52:46.564627 extend-filesystems[1447]: Found loop4 Mar 7 00:52:46.564627 extend-filesystems[1447]: Found loop5 Mar 7 00:52:46.564627 extend-filesystems[1447]: Found loop6 Mar 7 00:52:46.564627 extend-filesystems[1447]: Found loop7 Mar 7 00:52:46.564627 extend-filesystems[1447]: Found sda Mar 7 00:52:46.564627 extend-filesystems[1447]: Found sda1 Mar 7 00:52:46.564627 extend-filesystems[1447]: Found sda2 Mar 7 00:52:46.564627 extend-filesystems[1447]: Found sda3 Mar 7 00:52:46.564627 extend-filesystems[1447]: Found usr Mar 7 00:52:46.591660 extend-filesystems[1447]: Found sda4 Mar 7 00:52:46.591660 extend-filesystems[1447]: Found sda6 Mar 7 00:52:46.591660 extend-filesystems[1447]: Found sda7 Mar 7 00:52:46.591660 extend-filesystems[1447]: Found sda9 Mar 7 00:52:46.591660 extend-filesystems[1447]: Checking size of /dev/sda9 Mar 7 00:52:46.569810 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 7 00:52:46.600259 coreos-metadata[1444]: Mar 07 00:52:46.566 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Mar 7 00:52:46.600259 coreos-metadata[1444]: Mar 07 00:52:46.568 INFO Fetch successful Mar 7 00:52:46.600259 coreos-metadata[1444]: Mar 07 00:52:46.568 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Mar 7 00:52:46.600259 coreos-metadata[1444]: Mar 07 00:52:46.572 INFO Fetch successful Mar 7 00:52:46.573523 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 00:52:46.574810 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 00:52:46.615347 jq[1457]: true Mar 7 00:52:46.632486 extend-filesystems[1447]: Resized partition /dev/sda9 Mar 7 00:52:46.632265 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 00:52:46.632967 dbus-daemon[1445]: [system] SELinux support is enabled Mar 7 00:52:46.633304 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 7 00:52:46.642273 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 00:52:46.652256 extend-filesystems[1489]: resize2fs 1.47.1 (20-May-2024) Mar 7 00:52:46.642408 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 00:52:46.644261 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 00:52:46.644284 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 00:52:46.654823 tar[1464]: linux-arm64/LICENSE Mar 7 00:52:46.654823 tar[1464]: linux-arm64/helm Mar 7 00:52:46.667265 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Mar 7 00:52:46.669682 update_engine[1456]: I20260307 00:52:46.668999 1456 main.cc:92] Flatcar Update Engine starting Mar 7 00:52:46.677844 systemd[1]: Started update-engine.service - Update Engine. Mar 7 00:52:46.681525 update_engine[1456]: I20260307 00:52:46.680311 1456 update_check_scheduler.cc:74] Next update check in 7m3s Mar 7 00:52:46.687979 jq[1485]: true Mar 7 00:52:46.687346 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 00:52:46.688850 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 00:52:46.691101 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 00:52:46.788762 systemd-logind[1455]: New seat seat0. Mar 7 00:52:46.796156 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (Power Button) Mar 7 00:52:46.796183 systemd-logind[1455]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Mar 7 00:52:46.796437 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:52:46.811482 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 00:52:46.815398 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:52:46.840086 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1378) Mar 7 00:52:46.851634 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Mar 7 00:52:46.877877 extend-filesystems[1489]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Mar 7 00:52:46.877877 extend-filesystems[1489]: old_desc_blocks = 1, new_desc_blocks = 5 Mar 7 00:52:46.877877 extend-filesystems[1489]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Mar 7 00:52:46.886935 extend-filesystems[1447]: Resized filesystem in /dev/sda9 Mar 7 00:52:46.886935 extend-filesystems[1447]: Found sr0 Mar 7 00:52:46.882455 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 00:52:46.899409 bash[1518]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:52:46.882767 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:52:46.892099 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 00:52:46.900689 systemd[1]: Starting sshkeys.service... Mar 7 00:52:46.919394 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 00:52:46.926438 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 7 00:52:47.022838 containerd[1473]: time="2026-03-07T00:52:47.022209334Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 00:52:47.035522 coreos-metadata[1523]: Mar 07 00:52:47.035 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Mar 7 00:52:47.038003 coreos-metadata[1523]: Mar 07 00:52:47.037 INFO Fetch successful Mar 7 00:52:47.040655 unknown[1523]: wrote ssh authorized keys file for user: core Mar 7 00:52:47.073710 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:52:47.077480 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 00:52:47.082871 systemd[1]: Finished sshkeys.service. Mar 7 00:52:47.106305 containerd[1473]: time="2026-03-07T00:52:47.106020302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114251927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114309642Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114328842Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114498970Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114519188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114585875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114598374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114937926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114956224Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114972680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:47.115624 containerd[1473]: time="2026-03-07T00:52:47.114983533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 7 00:52:47.117088 containerd[1473]: time="2026-03-07T00:52:47.117037257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:47.118064 containerd[1473]: time="2026-03-07T00:52:47.117505991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:52:47.118352 containerd[1473]: time="2026-03-07T00:52:47.118329518Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:52:47.118697 containerd[1473]: time="2026-03-07T00:52:47.118678434Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 00:52:47.118990 containerd[1473]: time="2026-03-07T00:52:47.118970261Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 7 00:52:47.119323 containerd[1473]: time="2026-03-07T00:52:47.119301623Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:52:47.125062 containerd[1473]: time="2026-03-07T00:52:47.124971060Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 00:52:47.125814 containerd[1473]: time="2026-03-07T00:52:47.125787769Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 00:52:47.125940 containerd[1473]: time="2026-03-07T00:52:47.125926239Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 00:52:47.126026 containerd[1473]: time="2026-03-07T00:52:47.126013849Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 00:52:47.126239 containerd[1473]: time="2026-03-07T00:52:47.126070820Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 00:52:47.126985 containerd[1473]: time="2026-03-07T00:52:47.126957704Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 00:52:47.128828 containerd[1473]: time="2026-03-07T00:52:47.128176186Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 00:52:47.128976 containerd[1473]: time="2026-03-07T00:52:47.128803803Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 00:52:47.129375 containerd[1473]: time="2026-03-07T00:52:47.129023457Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 00:52:47.129464 containerd[1473]: time="2026-03-07T00:52:47.129449561Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 7 00:52:47.129529 containerd[1473]: time="2026-03-07T00:52:47.129518169Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 00:52:47.129612 containerd[1473]: time="2026-03-07T00:52:47.129569928Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 7 00:52:47.130041 containerd[1473]: time="2026-03-07T00:52:47.129720073Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 00:52:47.130041 containerd[1473]: time="2026-03-07T00:52:47.129745463Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 00:52:47.130041 containerd[1473]: time="2026-03-07T00:52:47.129763683Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 00:52:47.130149 containerd[1473]: time="2026-03-07T00:52:47.130134462Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 00:52:47.130210 containerd[1473]: time="2026-03-07T00:52:47.130188494Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 00:52:47.130276 containerd[1473]: time="2026-03-07T00:52:47.130249892Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 00:52:47.130731 containerd[1473]: time="2026-03-07T00:52:47.130415318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.130731 containerd[1473]: time="2026-03-07T00:52:47.130442824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.130731 containerd[1473]: time="2026-03-07T00:52:47.130456969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.130841 containerd[1473]: time="2026-03-07T00:52:47.130826377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131063 containerd[1473]: time="2026-03-07T00:52:47.130952268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131063 containerd[1473]: time="2026-03-07T00:52:47.130988825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131063 containerd[1473]: time="2026-03-07T00:52:47.131007593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131063 containerd[1473]: time="2026-03-07T00:52:47.131024559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131039879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131562959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131578318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131589994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131615149Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131633212Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131669181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131690300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.131787 containerd[1473]: time="2026-03-07T00:52:47.131704131Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.132981973Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.133016493Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.133028835Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.133061983Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.133074991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.133089763Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.133101909Z" level=info msg="NRI interface is disabled by configuration." Mar 7 00:52:47.133309 containerd[1473]: time="2026-03-07T00:52:47.133113468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 7 00:52:47.135113 containerd[1473]: time="2026-03-07T00:52:47.134521669Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 00:52:47.135113 containerd[1473]: time="2026-03-07T00:52:47.134764401Z" level=info msg="Connect containerd service" Mar 7 00:52:47.135660 containerd[1473]: time="2026-03-07T00:52:47.135331168Z" level=info msg="using legacy CRI server" Mar 7 00:52:47.135815 containerd[1473]: time="2026-03-07T00:52:47.135349388Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 00:52:47.136372 containerd[1473]: time="2026-03-07T00:52:47.135950400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 00:52:47.139101 containerd[1473]: time="2026-03-07T00:52:47.139062155Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:52:47.139594 
containerd[1473]: time="2026-03-07T00:52:47.139390500Z" level=info msg="Start subscribing containerd event" Mar 7 00:52:47.139594 containerd[1473]: time="2026-03-07T00:52:47.139445551Z" level=info msg="Start recovering state" Mar 7 00:52:47.140687 containerd[1473]: time="2026-03-07T00:52:47.140029010Z" level=info msg="Start event monitor" Mar 7 00:52:47.140687 containerd[1473]: time="2026-03-07T00:52:47.140077556Z" level=info msg="Start snapshots syncer" Mar 7 00:52:47.140687 containerd[1473]: time="2026-03-07T00:52:47.140089507Z" level=info msg="Start cni network conf syncer for default" Mar 7 00:52:47.140687 containerd[1473]: time="2026-03-07T00:52:47.140097618Z" level=info msg="Start streaming server" Mar 7 00:52:47.142326 containerd[1473]: time="2026-03-07T00:52:47.142300076Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 00:52:47.143307 containerd[1473]: time="2026-03-07T00:52:47.143280919Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 00:52:47.144291 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 00:52:47.147153 containerd[1473]: time="2026-03-07T00:52:47.147122281Z" level=info msg="containerd successfully booted in 0.132458s" Mar 7 00:52:47.205170 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:52:47.420406 tar[1464]: linux-arm64/README.md Mar 7 00:52:47.435111 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 00:52:47.461366 systemd-networkd[1365]: eth0: Gained IPv6LL Mar 7 00:52:47.471030 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 7 00:52:47.472892 systemd[1]: Reached target network-online.target - Network is Online. Mar 7 00:52:47.484280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:52:47.493510 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 00:52:47.533106 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 00:52:47.576916 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 00:52:47.602625 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 00:52:47.613528 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 00:52:47.623107 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 00:52:47.625111 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 00:52:47.635970 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 00:52:47.647277 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 00:52:47.653185 systemd-networkd[1365]: eth1: Gained IPv6LL Mar 7 00:52:47.657553 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 00:52:47.666484 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 7 00:52:47.667503 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 00:52:48.312241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:52:48.313708 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 00:52:48.316188 systemd[1]: Startup finished in 835ms (kernel) + 4.914s (initrd) + 4.737s (userspace) = 10.487s. 
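The long "Start cri plugin with config" entry above is containerd's dump of its effective CRI configuration: overlayfs snapshotter, runc as the default runtime with SystemdCgroup:true, and registry.k8s.io/pause:3.8 as the sandbox image. Expressed as a config.toml fragment, those same values would look roughly like the sketch below (layout per containerd 1.7's version-2 config; only fields visible in the dump are shown, so this is not the host's actual file):

  # sketch of the equivalent /etc/containerd/config.toml fragment
  version = 2
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true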
Mar 7 00:52:48.325593 (kubelet)[1574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:52:48.852393 kubelet[1574]: E0307 00:52:48.852337 1574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:52:48.855957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:52:48.856525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:52:59.106895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 00:52:59.123482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:52:59.242149 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:52:59.253635 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:52:59.309212 kubelet[1592]: E0307 00:52:59.309147 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:52:59.313624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:52:59.314106 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:09.564524 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 7 00:53:09.577571 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:09.711541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:09.721627 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:09.771412 kubelet[1607]: E0307 00:53:09.771360 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:09.774691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:09.774851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:13.202762 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 00:53:13.210575 systemd[1]: Started sshd@0-116.202.31.117:22-20.161.92.111:44558.service - OpenSSH per-connection server daemon (20.161.92.111:44558). Mar 7 00:53:13.804466 sshd[1614]: Accepted publickey for core from 20.161.92.111 port 44558 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:13.808014 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:13.822496 systemd-logind[1455]: New session 1 of user core. Mar 7 00:53:13.825250 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
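The kubelet failures above (and the scheduled restarts that follow) are all the same condition: /var/lib/kubelet/config.yaml does not exist yet, because the kubeadm step implied by the KUBELET_KUBEADM_ARGS drop-in has not run on this node at this point, so the unit keeps restarting until that file appears. For reference, the file kubeadm eventually writes is a KubeletConfiguration document; a minimal illustrative sketch (not the actual contents later written on this host) is:

  # illustrative minimal /var/lib/kubelet/config.yaml, normally generated by kubeadm init/join
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests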
Mar 7 00:53:13.830373 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 7 00:53:13.847366 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 00:53:13.855479 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 00:53:13.860684 (systemd)[1618]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 00:53:13.991112 systemd[1618]: Queued start job for default target default.target. Mar 7 00:53:14.002186 systemd[1618]: Created slice app.slice - User Application Slice. Mar 7 00:53:14.002473 systemd[1618]: Reached target paths.target - Paths. Mar 7 00:53:14.002505 systemd[1618]: Reached target timers.target - Timers. Mar 7 00:53:14.005171 systemd[1618]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 00:53:14.026098 systemd[1618]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 7 00:53:14.026379 systemd[1618]: Reached target sockets.target - Sockets. Mar 7 00:53:14.026488 systemd[1618]: Reached target basic.target - Basic System. Mar 7 00:53:14.026620 systemd[1618]: Reached target default.target - Main User Target. Mar 7 00:53:14.026726 systemd[1618]: Startup finished in 157ms. Mar 7 00:53:14.027376 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 00:53:14.040709 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 00:53:14.484813 systemd[1]: Started sshd@1-116.202.31.117:22-20.161.92.111:44572.service - OpenSSH per-connection server daemon (20.161.92.111:44572). Mar 7 00:53:15.072676 sshd[1629]: Accepted publickey for core from 20.161.92.111 port 44572 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:15.076732 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:15.088280 systemd-logind[1455]: New session 2 of user core. Mar 7 00:53:15.096350 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 00:53:15.492722 sshd[1629]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:15.498735 systemd[1]: sshd@1-116.202.31.117:22-20.161.92.111:44572.service: Deactivated successfully. Mar 7 00:53:15.501843 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 00:53:15.502785 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Mar 7 00:53:15.504059 systemd-logind[1455]: Removed session 2. Mar 7 00:53:15.610115 systemd[1]: Started sshd@2-116.202.31.117:22-20.161.92.111:44574.service - OpenSSH per-connection server daemon (20.161.92.111:44574). Mar 7 00:53:16.207458 sshd[1636]: Accepted publickey for core from 20.161.92.111 port 44574 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:16.210239 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:16.216150 systemd-logind[1455]: New session 3 of user core. Mar 7 00:53:16.226555 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 7 00:53:16.627013 sshd[1636]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:16.632807 systemd[1]: sshd@2-116.202.31.117:22-20.161.92.111:44574.service: Deactivated successfully. Mar 7 00:53:16.635956 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 00:53:16.637198 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Mar 7 00:53:16.638653 systemd-logind[1455]: Removed session 3. 
Mar 7 00:53:16.742604 systemd[1]: Started sshd@3-116.202.31.117:22-20.161.92.111:44582.service - OpenSSH per-connection server daemon (20.161.92.111:44582). Mar 7 00:53:17.327919 sshd[1643]: Accepted publickey for core from 20.161.92.111 port 44582 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:17.330853 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:17.336514 systemd-logind[1455]: New session 4 of user core. Mar 7 00:53:17.345574 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 00:53:17.747770 sshd[1643]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:17.752959 systemd[1]: sshd@3-116.202.31.117:22-20.161.92.111:44582.service: Deactivated successfully. Mar 7 00:53:17.755391 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 00:53:17.757029 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Mar 7 00:53:17.758649 systemd-logind[1455]: Removed session 4. Mar 7 00:53:17.858167 systemd[1]: Started sshd@4-116.202.31.117:22-20.161.92.111:44598.service - OpenSSH per-connection server daemon (20.161.92.111:44598). Mar 7 00:53:18.463198 sshd[1650]: Accepted publickey for core from 20.161.92.111 port 44598 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:18.464714 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:18.473368 systemd-logind[1455]: New session 5 of user core. Mar 7 00:53:18.484381 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 00:53:18.796912 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 00:53:18.797262 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:18.814921 sudo[1653]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:18.909452 sshd[1650]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:18.914858 systemd[1]: sshd@4-116.202.31.117:22-20.161.92.111:44598.service: Deactivated successfully. Mar 7 00:53:18.917804 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 00:53:18.919928 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Mar 7 00:53:18.921350 systemd-logind[1455]: Removed session 5. Mar 7 00:53:19.015400 systemd[1]: Started sshd@5-116.202.31.117:22-20.161.92.111:44600.service - OpenSSH per-connection server daemon (20.161.92.111:44600). Mar 7 00:53:19.613116 sshd[1658]: Accepted publickey for core from 20.161.92.111 port 44600 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:19.614759 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:19.621304 systemd-logind[1455]: New session 6 of user core. Mar 7 00:53:19.628425 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 7 00:53:19.941188 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 00:53:19.941509 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:19.943570 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 7 00:53:19.948284 sudo[1662]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:19.952401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 00:53:19.956821 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 00:53:19.957242 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:19.971378 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 00:53:19.986961 auditctl[1668]: No rules Mar 7 00:53:19.989128 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 00:53:19.989349 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 00:53:19.998428 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:53:20.035103 augenrules[1686]: No rules Mar 7 00:53:20.038081 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:53:20.039201 sudo[1661]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:20.101383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:20.106894 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:20.133646 sshd[1658]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:20.140189 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Mar 7 00:53:20.140547 systemd[1]: sshd@5-116.202.31.117:22-20.161.92.111:44600.service: Deactivated successfully. Mar 7 00:53:20.143690 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 00:53:20.145661 systemd-logind[1455]: Removed session 6. Mar 7 00:53:20.160682 kubelet[1696]: E0307 00:53:20.160252 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:20.163665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:20.163833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:20.245475 systemd[1]: Started sshd@6-116.202.31.117:22-20.161.92.111:36448.service - OpenSSH per-connection server daemon (20.161.92.111:36448). Mar 7 00:53:20.837003 sshd[1707]: Accepted publickey for core from 20.161.92.111 port 36448 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:53:20.840679 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:53:20.845893 systemd-logind[1455]: New session 7 of user core. Mar 7 00:53:20.855407 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 00:53:21.165514 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 00:53:21.166259 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:53:21.472624 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 00:53:21.481689 (dockerd)[1725]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 00:53:21.741116 dockerd[1725]: time="2026-03-07T00:53:21.740296507Z" level=info msg="Starting up" Mar 7 00:53:21.836534 systemd[1]: var-lib-docker-metacopy\x2dcheck905731578-merged.mount: Deactivated successfully. 
Mar 7 00:53:21.845848 dockerd[1725]: time="2026-03-07T00:53:21.845802428Z" level=info msg="Loading containers: start." Mar 7 00:53:21.953096 kernel: Initializing XFRM netlink socket Mar 7 00:53:22.036290 systemd-networkd[1365]: docker0: Link UP Mar 7 00:53:22.052922 dockerd[1725]: time="2026-03-07T00:53:22.052824251Z" level=info msg="Loading containers: done." Mar 7 00:53:22.074556 dockerd[1725]: time="2026-03-07T00:53:22.074458910Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 00:53:22.074826 dockerd[1725]: time="2026-03-07T00:53:22.074633429Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 00:53:22.074826 dockerd[1725]: time="2026-03-07T00:53:22.074814229Z" level=info msg="Daemon has completed initialization" Mar 7 00:53:22.120074 dockerd[1725]: time="2026-03-07T00:53:22.119557901Z" level=info msg="API listen on /run/docker.sock" Mar 7 00:53:22.120306 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 00:53:22.616506 containerd[1473]: time="2026-03-07T00:53:22.616086042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 00:53:23.205110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800120832.mount: Deactivated successfully. Mar 7 00:53:24.461880 containerd[1473]: time="2026-03-07T00:53:24.460718306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:24.463576 containerd[1473]: time="2026-03-07T00:53:24.463529819Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390272" Mar 7 00:53:24.465117 containerd[1473]: time="2026-03-07T00:53:24.465077815Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:24.470206 containerd[1473]: time="2026-03-07T00:53:24.470155202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:24.471686 containerd[1473]: time="2026-03-07T00:53:24.471641398Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 1.855501996s" Mar 7 00:53:24.471842 containerd[1473]: time="2026-03-07T00:53:24.471823718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 7 00:53:24.472602 containerd[1473]: time="2026-03-07T00:53:24.472540156Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 00:53:25.865484 containerd[1473]: time="2026-03-07T00:53:25.865386433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:25.867648 containerd[1473]: 
time="2026-03-07T00:53:25.867571108Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552126" Mar 7 00:53:25.870388 containerd[1473]: time="2026-03-07T00:53:25.869128344Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:25.879561 containerd[1473]: time="2026-03-07T00:53:25.878989720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:25.880963 containerd[1473]: time="2026-03-07T00:53:25.880898236Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.407949681s" Mar 7 00:53:25.881076 containerd[1473]: time="2026-03-07T00:53:25.880961596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\"" Mar 7 00:53:25.882194 containerd[1473]: time="2026-03-07T00:53:25.882133953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 00:53:27.004078 containerd[1473]: time="2026-03-07T00:53:27.002173090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:27.004608 containerd[1473]: time="2026-03-07T00:53:27.004563765Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301325" Mar 7 00:53:27.004990 containerd[1473]: time="2026-03-07T00:53:27.004937044Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:27.009687 containerd[1473]: time="2026-03-07T00:53:27.009640394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:27.011238 containerd[1473]: time="2026-03-07T00:53:27.011196631Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.128992238s" Mar 7 00:53:27.011388 containerd[1473]: time="2026-03-07T00:53:27.011371830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\"" Mar 7 00:53:27.011935 containerd[1473]: time="2026-03-07T00:53:27.011884349Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 00:53:27.878591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount288081238.mount: Deactivated successfully. 
Mar 7 00:53:28.279681 containerd[1473]: time="2026-03-07T00:53:28.279508987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:28.280943 containerd[1473]: time="2026-03-07T00:53:28.280905904Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148896" Mar 7 00:53:28.282073 containerd[1473]: time="2026-03-07T00:53:28.281853302Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:28.284524 containerd[1473]: time="2026-03-07T00:53:28.284388057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:28.285153 containerd[1473]: time="2026-03-07T00:53:28.285111775Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.272956986s" Mar 7 00:53:28.285226 containerd[1473]: time="2026-03-07T00:53:28.285153935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\"" Mar 7 00:53:28.285722 containerd[1473]: time="2026-03-07T00:53:28.285643094Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 7 00:53:28.816232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3484696926.mount: Deactivated successfully. 
Mar 7 00:53:29.670908 containerd[1473]: time="2026-03-07T00:53:29.670776031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.673813 containerd[1473]: time="2026-03-07T00:53:29.673749545Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Mar 7 00:53:29.675123 containerd[1473]: time="2026-03-07T00:53:29.674594144Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.678343 containerd[1473]: time="2026-03-07T00:53:29.678303096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:29.679760 containerd[1473]: time="2026-03-07T00:53:29.679711614Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.39402892s" Mar 7 00:53:29.679760 containerd[1473]: time="2026-03-07T00:53:29.679757254Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Mar 7 00:53:29.680857 containerd[1473]: time="2026-03-07T00:53:29.680809451Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 7 00:53:30.141542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007105134.mount: Deactivated successfully. 
Mar 7 00:53:30.150510 containerd[1473]: time="2026-03-07T00:53:30.150312388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:30.152606 containerd[1473]: time="2026-03-07T00:53:30.152532904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Mar 7 00:53:30.155491 containerd[1473]: time="2026-03-07T00:53:30.155341179Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:30.159669 containerd[1473]: time="2026-03-07T00:53:30.159594571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:30.160568 containerd[1473]: time="2026-03-07T00:53:30.160198810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 479.348159ms" Mar 7 00:53:30.160568 containerd[1473]: time="2026-03-07T00:53:30.160235050Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 7 00:53:30.160739 containerd[1473]: time="2026-03-07T00:53:30.160704049Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 7 00:53:30.335946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 7 00:53:30.348457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:30.490713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:30.496611 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:53:30.549020 kubelet[2004]: E0307 00:53:30.548840 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:53:30.551726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:53:30.551942 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:53:30.682078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077872725.mount: Deactivated successfully. 
Mar 7 00:53:31.635190 containerd[1473]: time="2026-03-07T00:53:31.635138531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:31.636485 containerd[1473]: time="2026-03-07T00:53:31.636440889Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885878" Mar 7 00:53:31.637721 containerd[1473]: time="2026-03-07T00:53:31.637332967Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:31.641173 containerd[1473]: time="2026-03-07T00:53:31.641128760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:31.643115 containerd[1473]: time="2026-03-07T00:53:31.642948597Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.482213228s" Mar 7 00:53:31.643115 containerd[1473]: time="2026-03-07T00:53:31.642995757Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Mar 7 00:53:31.840086 update_engine[1456]: I20260307 00:53:31.839107 1456 update_attempter.cc:509] Updating boot flags... Mar 7 00:53:31.900058 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2104) Mar 7 00:53:35.922367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:35.930755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:35.969275 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit session-7.scope)... Mar 7 00:53:35.969471 systemd[1]: Reloading... Mar 7 00:53:36.098207 zram_generator::config[2160]: No configuration found. Mar 7 00:53:36.201685 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:53:36.279569 systemd[1]: Reloading finished in 309 ms. Mar 7 00:53:36.331548 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 7 00:53:36.331636 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 7 00:53:36.332166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:36.336467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:36.463154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:36.476495 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:53:36.531040 kubelet[2205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
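The containerd entries above report how long each control-plane image pull took ("Pulled image … in 1.855501996s" for kube-apiserver, "… in 479.348159ms" for pause:3.10, and so on). A minimal sketch for tabulating those durations from an exported journal follows; the export command, file name, and regular expression are illustrative assumptions based on the message format above, not something recorded in this log.

#!/usr/bin/env python3
"""Tabulate containerd "Pulled image ... in <duration>" messages.

Assumes the journal has been exported to plain text first, for example with
`journalctl -u containerd > containerd.log`; the path and message format are
assumptions inferred from the entries above.
"""
import re
import sys

# Matches e.g.: msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" ... in 1.482213228s"
PULL_RE = re.compile(r'Pulled image \\?"(?P<image>[^\\"]+)\\?".* in (?P<dur>[0-9.]+)(?P<unit>ms|s)')

def tabulate(path: str) -> None:
    with open(path, encoding="utf-8", errors="replace") as journal:
        for line in journal:
            m = PULL_RE.search(line)
            if not m:
                continue
            # Normalize ms/s so the pulls can be compared directly.
            seconds = float(m.group("dur")) / (1000.0 if m.group("unit") == "ms" else 1.0)
            print(f"{m.group('image'):55s} {seconds:9.3f} s")

if __name__ == "__main__":
    tabulate(sys.argv[1] if len(sys.argv) > 1 else "containerd.log")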
Mar 7 00:53:36.531040 kubelet[2205]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:53:36.531040 kubelet[2205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:53:36.531465 kubelet[2205]: I0307 00:53:36.531113 2205 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:53:36.852101 kubelet[2205]: I0307 00:53:36.851085 2205 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:53:36.852101 kubelet[2205]: I0307 00:53:36.851146 2205 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:53:36.852101 kubelet[2205]: I0307 00:53:36.851443 2205 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:53:36.882355 kubelet[2205]: E0307 00:53:36.882291 2205 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://116.202.31.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 7 00:53:36.882530 kubelet[2205]: I0307 00:53:36.882402 2205 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:53:36.896691 kubelet[2205]: E0307 00:53:36.896205 2205 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:53:36.896691 kubelet[2205]: I0307 00:53:36.896356 2205 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 00:53:36.900868 kubelet[2205]: I0307 00:53:36.900813 2205 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 7 00:53:36.903142 kubelet[2205]: I0307 00:53:36.902521 2205 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:53:36.903142 kubelet[2205]: I0307 00:53:36.902578 2205 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-2a659a64a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 00:53:36.903142 kubelet[2205]: I0307 00:53:36.902760 2205 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 00:53:36.903142 kubelet[2205]: I0307 00:53:36.902769 2205 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 00:53:36.903142 kubelet[2205]: I0307 00:53:36.902983 2205 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:53:36.907145 kubelet[2205]: I0307 00:53:36.907104 2205 kubelet.go:480] "Attempting to sync node with API server" Mar 7 00:53:36.910254 kubelet[2205]: I0307 00:53:36.908518 2205 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:53:36.910254 kubelet[2205]: I0307 00:53:36.909402 2205 kubelet.go:386] "Adding apiserver pod source" Mar 7 00:53:36.910254 kubelet[2205]: I0307 00:53:36.909437 2205 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:53:36.915701 kubelet[2205]: E0307 00:53:36.915643 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://116.202.31.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-2a659a64a8&limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:53:36.916370 kubelet[2205]: I0307 00:53:36.916308 2205 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:53:36.918600 kubelet[2205]: I0307 00:53:36.917551 2205 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate 
is disabled" Mar 7 00:53:36.918600 kubelet[2205]: W0307 00:53:36.917727 2205 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 7 00:53:36.924766 kubelet[2205]: I0307 00:53:36.924741 2205 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 00:53:36.924991 kubelet[2205]: I0307 00:53:36.924978 2205 server.go:1289] "Started kubelet" Mar 7 00:53:36.927191 kubelet[2205]: E0307 00:53:36.927007 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://116.202.31.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:53:36.927191 kubelet[2205]: I0307 00:53:36.927128 2205 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:53:36.928721 kubelet[2205]: I0307 00:53:36.928661 2205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:53:36.929201 kubelet[2205]: I0307 00:53:36.929182 2205 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:53:36.932446 kubelet[2205]: I0307 00:53:36.929865 2205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:53:36.937154 kubelet[2205]: E0307 00:53:36.934615 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://116.202.31.117:6443/api/v1/namespaces/default/events\": dial tcp 116.202.31.117:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-2a659a64a8.189a68f7d2b419b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-2a659a64a8,UID:ci-4081-3-6-n-2a659a64a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-2a659a64a8,},FirstTimestamp:2026-03-07 00:53:36.924940729 +0000 UTC m=+0.441095631,LastTimestamp:2026-03-07 00:53:36.924940729 +0000 UTC m=+0.441095631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-2a659a64a8,}" Mar 7 00:53:36.937356 kubelet[2205]: I0307 00:53:36.937236 2205 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 7 00:53:36.940065 kubelet[2205]: I0307 00:53:36.939253 2205 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:53:36.942149 kubelet[2205]: I0307 00:53:36.941246 2205 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:53:36.947591 kubelet[2205]: I0307 00:53:36.941335 2205 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 00:53:36.948208 kubelet[2205]: I0307 00:53:36.941359 2205 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 00:53:36.948327 kubelet[2205]: E0307 00:53:36.941519 2205 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" Mar 7 00:53:36.948442 kubelet[2205]: I0307 00:53:36.948430 2205 reconciler.go:26] "Reconciler: start to sync state" Mar 7 00:53:36.949099 kubelet[2205]: E0307 00:53:36.949061 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.31.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-2a659a64a8?timeout=10s\": dial tcp 116.202.31.117:6443: connect: connection refused" interval="200ms" Mar 7 00:53:36.949756 kubelet[2205]: I0307 00:53:36.949736 2205 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:53:36.949928 kubelet[2205]: I0307 00:53:36.949912 2205 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:53:36.951823 kubelet[2205]: E0307 00:53:36.951786 2205 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:53:36.952596 kubelet[2205]: I0307 00:53:36.952576 2205 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:53:36.962432 kubelet[2205]: I0307 00:53:36.961923 2205 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 00:53:36.962432 kubelet[2205]: I0307 00:53:36.961982 2205 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 00:53:36.962432 kubelet[2205]: I0307 00:53:36.962012 2205 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 00:53:36.962432 kubelet[2205]: I0307 00:53:36.962021 2205 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:53:36.962432 kubelet[2205]: E0307 00:53:36.962106 2205 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:53:36.964349 kubelet[2205]: E0307 00:53:36.963275 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://116.202.31.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:53:36.970158 kubelet[2205]: E0307 00:53:36.969387 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://116.202.31.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 00:53:36.986913 kubelet[2205]: I0307 00:53:36.986879 2205 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:53:36.986913 kubelet[2205]: I0307 00:53:36.986904 2205 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:53:36.987116 kubelet[2205]: I0307 00:53:36.986930 2205 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:53:36.993468 kubelet[2205]: I0307 00:53:36.993406 2205 policy_none.go:49] "None policy: Start" Mar 7 00:53:36.993468 kubelet[2205]: I0307 00:53:36.993446 2205 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:53:36.993468 kubelet[2205]: I0307 00:53:36.993462 2205 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:53:37.007178 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 7 00:53:37.017412 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 7 00:53:37.025001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 7 00:53:37.034728 kubelet[2205]: E0307 00:53:37.034635 2205 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:53:37.035031 kubelet[2205]: I0307 00:53:37.034978 2205 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:53:37.035141 kubelet[2205]: I0307 00:53:37.035012 2205 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:53:37.037176 kubelet[2205]: I0307 00:53:37.036956 2205 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:53:37.040091 kubelet[2205]: E0307 00:53:37.040027 2205 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:53:37.040224 kubelet[2205]: E0307 00:53:37.040102 2205 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-2a659a64a8\" not found" Mar 7 00:53:37.078970 systemd[1]: Created slice kubepods-burstable-podffa1bb3ecffc39f583eb0d17d1029745.slice - libcontainer container kubepods-burstable-podffa1bb3ecffc39f583eb0d17d1029745.slice. 
Mar 7 00:53:37.094091 kubelet[2205]: E0307 00:53:37.093987 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.097744 systemd[1]: Created slice kubepods-burstable-podeb3f96c4c80d5ad85a876dfb440f5dcd.slice - libcontainer container kubepods-burstable-podeb3f96c4c80d5ad85a876dfb440f5dcd.slice. Mar 7 00:53:37.109084 kubelet[2205]: E0307 00:53:37.107253 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://116.202.31.117:6443/api/v1/namespaces/default/events\": dial tcp 116.202.31.117:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-2a659a64a8.189a68f7d2b419b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-2a659a64a8,UID:ci-4081-3-6-n-2a659a64a8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-2a659a64a8,},FirstTimestamp:2026-03-07 00:53:36.924940729 +0000 UTC m=+0.441095631,LastTimestamp:2026-03-07 00:53:36.924940729 +0000 UTC m=+0.441095631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-2a659a64a8,}" Mar 7 00:53:37.111309 kubelet[2205]: E0307 00:53:37.111248 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.114974 systemd[1]: Created slice kubepods-burstable-pod5b5cb8e8c141736cc5839070cd60db01.slice - libcontainer container kubepods-burstable-pod5b5cb8e8c141736cc5839070cd60db01.slice. 
Mar 7 00:53:37.118649 kubelet[2205]: E0307 00:53:37.118580 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.139131 kubelet[2205]: I0307 00:53:37.139077 2205 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.139800 kubelet[2205]: E0307 00:53:37.139751 2205 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://116.202.31.117:6443/api/v1/nodes\": dial tcp 116.202.31.117:6443: connect: connection refused" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.149955 kubelet[2205]: I0307 00:53:37.149917 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffa1bb3ecffc39f583eb0d17d1029745-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" (UID: \"ffa1bb3ecffc39f583eb0d17d1029745\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150209 kubelet[2205]: E0307 00:53:37.149946 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.31.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-2a659a64a8?timeout=10s\": dial tcp 116.202.31.117:6443: connect: connection refused" interval="400ms" Mar 7 00:53:37.150209 kubelet[2205]: I0307 00:53:37.150164 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffa1bb3ecffc39f583eb0d17d1029745-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" (UID: \"ffa1bb3ecffc39f583eb0d17d1029745\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150409 kubelet[2205]: I0307 00:53:37.150228 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffa1bb3ecffc39f583eb0d17d1029745-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" (UID: \"ffa1bb3ecffc39f583eb0d17d1029745\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150409 kubelet[2205]: I0307 00:53:37.150276 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150409 kubelet[2205]: I0307 00:53:37.150309 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b5cb8e8c141736cc5839070cd60db01-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-2a659a64a8\" (UID: \"5b5cb8e8c141736cc5839070cd60db01\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150409 kubelet[2205]: I0307 00:53:37.150345 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150409 kubelet[2205]: I0307 00:53:37.150372 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150593 kubelet[2205]: I0307 00:53:37.150394 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.150593 kubelet[2205]: I0307 00:53:37.150418 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.342911 kubelet[2205]: I0307 00:53:37.342497 2205 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.343272 kubelet[2205]: E0307 00:53:37.343203 2205 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://116.202.31.117:6443/api/v1/nodes\": dial tcp 116.202.31.117:6443: connect: connection refused" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.396995 containerd[1473]: time="2026-03-07T00:53:37.396458583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-2a659a64a8,Uid:ffa1bb3ecffc39f583eb0d17d1029745,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:37.414083 containerd[1473]: time="2026-03-07T00:53:37.413698280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-2a659a64a8,Uid:eb3f96c4c80d5ad85a876dfb440f5dcd,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:37.420203 containerd[1473]: time="2026-03-07T00:53:37.419909312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-2a659a64a8,Uid:5b5cb8e8c141736cc5839070cd60db01,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:37.551301 kubelet[2205]: E0307 00:53:37.551239 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.31.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-2a659a64a8?timeout=10s\": dial tcp 116.202.31.117:6443: connect: connection refused" interval="800ms" Mar 7 00:53:37.746421 kubelet[2205]: I0307 00:53:37.745720 2205 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.746421 kubelet[2205]: E0307 00:53:37.746164 2205 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://116.202.31.117:6443/api/v1/nodes\": dial tcp 116.202.31.117:6443: connect: connection refused" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:37.787512 kubelet[2205]: E0307 00:53:37.787449 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://116.202.31.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 7 00:53:37.882893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060361267.mount: Deactivated successfully. Mar 7 00:53:37.891799 containerd[1473]: time="2026-03-07T00:53:37.891642210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:37.898790 containerd[1473]: time="2026-03-07T00:53:37.898594921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Mar 7 00:53:37.900727 containerd[1473]: time="2026-03-07T00:53:37.899446760Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:37.902081 containerd[1473]: time="2026-03-07T00:53:37.901717117Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:37.903792 containerd[1473]: time="2026-03-07T00:53:37.903436155Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:37.905180 containerd[1473]: time="2026-03-07T00:53:37.905140152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:53:37.907793 containerd[1473]: time="2026-03-07T00:53:37.907739629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 7 00:53:37.912372 containerd[1473]: time="2026-03-07T00:53:37.912233623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 7 00:53:37.913169 containerd[1473]: time="2026-03-07T00:53:37.913118742Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.10479ms" Mar 7 00:53:37.916955 containerd[1473]: time="2026-03-07T00:53:37.915029499Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 518.441396ms" Mar 7 00:53:37.917389 containerd[1473]: time="2026-03-07T00:53:37.917211777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"268403\" in 503.407937ms" Mar 7 00:53:38.067605 containerd[1473]: time="2026-03-07T00:53:38.066739463Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:38.067605 containerd[1473]: time="2026-03-07T00:53:38.066905943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:38.067605 containerd[1473]: time="2026-03-07T00:53:38.066922583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:38.067605 containerd[1473]: time="2026-03-07T00:53:38.067035303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:38.070399 containerd[1473]: time="2026-03-07T00:53:38.069920259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:38.070399 containerd[1473]: time="2026-03-07T00:53:38.069982779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:38.070399 containerd[1473]: time="2026-03-07T00:53:38.069997539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:38.070399 containerd[1473]: time="2026-03-07T00:53:38.070101299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:38.076844 containerd[1473]: time="2026-03-07T00:53:38.076540051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:38.076844 containerd[1473]: time="2026-03-07T00:53:38.076638571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:38.076844 containerd[1473]: time="2026-03-07T00:53:38.076654211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:38.079922 containerd[1473]: time="2026-03-07T00:53:38.079691807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:38.098682 systemd[1]: Started cri-containerd-0fc383543235f42d4c7fe9e26fff7f527c7e5bfa5141d32128f20377e22e8168.scope - libcontainer container 0fc383543235f42d4c7fe9e26fff7f527c7e5bfa5141d32128f20377e22e8168. Mar 7 00:53:38.119465 systemd[1]: Started cri-containerd-386042f5144283af7c741837721cdd2319e4be35be674d046bb4c6052252d3cc.scope - libcontainer container 386042f5144283af7c741837721cdd2319e4be35be674d046bb4c6052252d3cc. Mar 7 00:53:38.127510 systemd[1]: Started cri-containerd-53dd608abfeaa5410b4915c8cdec955e7e1a0fd0f17adf2646da7ab11d1d1f2e.scope - libcontainer container 53dd608abfeaa5410b4915c8cdec955e7e1a0fd0f17adf2646da7ab11d1d1f2e. 
Mar 7 00:53:38.167527 containerd[1473]: time="2026-03-07T00:53:38.167345777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-2a659a64a8,Uid:ffa1bb3ecffc39f583eb0d17d1029745,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fc383543235f42d4c7fe9e26fff7f527c7e5bfa5141d32128f20377e22e8168\"" Mar 7 00:53:38.179947 containerd[1473]: time="2026-03-07T00:53:38.179899921Z" level=info msg="CreateContainer within sandbox \"0fc383543235f42d4c7fe9e26fff7f527c7e5bfa5141d32128f20377e22e8168\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 7 00:53:38.188219 kubelet[2205]: E0307 00:53:38.188036 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://116.202.31.117:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 7 00:53:38.192135 containerd[1473]: time="2026-03-07T00:53:38.192082945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-2a659a64a8,Uid:5b5cb8e8c141736cc5839070cd60db01,Namespace:kube-system,Attempt:0,} returns sandbox id \"386042f5144283af7c741837721cdd2319e4be35be674d046bb4c6052252d3cc\"" Mar 7 00:53:38.199281 containerd[1473]: time="2026-03-07T00:53:38.199095977Z" level=info msg="CreateContainer within sandbox \"386042f5144283af7c741837721cdd2319e4be35be674d046bb4c6052252d3cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 7 00:53:38.201767 containerd[1473]: time="2026-03-07T00:53:38.201711813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-2a659a64a8,Uid:eb3f96c4c80d5ad85a876dfb440f5dcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"53dd608abfeaa5410b4915c8cdec955e7e1a0fd0f17adf2646da7ab11d1d1f2e\"" Mar 7 00:53:38.206810 containerd[1473]: time="2026-03-07T00:53:38.206760527Z" level=info msg="CreateContainer within sandbox \"0fc383543235f42d4c7fe9e26fff7f527c7e5bfa5141d32128f20377e22e8168\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58e18d03ac8d0fd1911534ecedf19810c36f54a96b982070cb4ce5420f3f4592\"" Mar 7 00:53:38.207383 containerd[1473]: time="2026-03-07T00:53:38.207090927Z" level=info msg="CreateContainer within sandbox \"53dd608abfeaa5410b4915c8cdec955e7e1a0fd0f17adf2646da7ab11d1d1f2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 7 00:53:38.208529 containerd[1473]: time="2026-03-07T00:53:38.208498605Z" level=info msg="StartContainer for \"58e18d03ac8d0fd1911534ecedf19810c36f54a96b982070cb4ce5420f3f4592\"" Mar 7 00:53:38.239361 systemd[1]: Started cri-containerd-58e18d03ac8d0fd1911534ecedf19810c36f54a96b982070cb4ce5420f3f4592.scope - libcontainer container 58e18d03ac8d0fd1911534ecedf19810c36f54a96b982070cb4ce5420f3f4592. 
Mar 7 00:53:38.263316 kubelet[2205]: E0307 00:53:38.263192 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://116.202.31.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-2a659a64a8&limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 7 00:53:38.270536 containerd[1473]: time="2026-03-07T00:53:38.270367087Z" level=info msg="CreateContainer within sandbox \"53dd608abfeaa5410b4915c8cdec955e7e1a0fd0f17adf2646da7ab11d1d1f2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f\"" Mar 7 00:53:38.272182 containerd[1473]: time="2026-03-07T00:53:38.271113046Z" level=info msg="StartContainer for \"6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f\"" Mar 7 00:53:38.280610 containerd[1473]: time="2026-03-07T00:53:38.280548234Z" level=info msg="CreateContainer within sandbox \"386042f5144283af7c741837721cdd2319e4be35be674d046bb4c6052252d3cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d75b72dc16274f8e12bccb88c5531a4ae5b150f398d252c07d1fe6741f5d259\"" Mar 7 00:53:38.284641 containerd[1473]: time="2026-03-07T00:53:38.282755311Z" level=info msg="StartContainer for \"6d75b72dc16274f8e12bccb88c5531a4ae5b150f398d252c07d1fe6741f5d259\"" Mar 7 00:53:38.324396 containerd[1473]: time="2026-03-07T00:53:38.324228739Z" level=info msg="StartContainer for \"58e18d03ac8d0fd1911534ecedf19810c36f54a96b982070cb4ce5420f3f4592\" returns successfully" Mar 7 00:53:38.338348 systemd[1]: Started cri-containerd-6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f.scope - libcontainer container 6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f. Mar 7 00:53:38.353309 kubelet[2205]: E0307 00:53:38.353269 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://116.202.31.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-2a659a64a8?timeout=10s\": dial tcp 116.202.31.117:6443: connect: connection refused" interval="1.6s" Mar 7 00:53:38.360210 systemd[1]: Started cri-containerd-6d75b72dc16274f8e12bccb88c5531a4ae5b150f398d252c07d1fe6741f5d259.scope - libcontainer container 6d75b72dc16274f8e12bccb88c5531a4ae5b150f398d252c07d1fe6741f5d259. 
Mar 7 00:53:38.363143 kubelet[2205]: E0307 00:53:38.363090 2205 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://116.202.31.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 116.202.31.117:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 7 00:53:38.411479 containerd[1473]: time="2026-03-07T00:53:38.411345069Z" level=info msg="StartContainer for \"6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f\" returns successfully" Mar 7 00:53:38.437333 containerd[1473]: time="2026-03-07T00:53:38.437134277Z" level=info msg="StartContainer for \"6d75b72dc16274f8e12bccb88c5531a4ae5b150f398d252c07d1fe6741f5d259\" returns successfully" Mar 7 00:53:38.549646 kubelet[2205]: I0307 00:53:38.549608 2205 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:38.985301 kubelet[2205]: E0307 00:53:38.984758 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:38.988606 kubelet[2205]: E0307 00:53:38.988141 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:38.992704 kubelet[2205]: E0307 00:53:38.992468 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:39.997069 kubelet[2205]: E0307 00:53:39.994461 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:39.997069 kubelet[2205]: E0307 00:53:39.994849 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:40.152514 kubelet[2205]: E0307 00:53:40.152477 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:40.996409 kubelet[2205]: E0307 00:53:40.996362 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:40.999003 kubelet[2205]: E0307 00:53:40.998961 2205 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.236549 kubelet[2205]: E0307 00:53:41.236504 2205 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-2a659a64a8\" not found" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.353843 kubelet[2205]: I0307 00:53:41.353793 2205 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.353843 kubelet[2205]: E0307 00:53:41.353843 2205 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-2a659a64a8\": node \"ci-4081-3-6-n-2a659a64a8\" not found" Mar 7 00:53:41.442612 
kubelet[2205]: I0307 00:53:41.441956 2205 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.452329 kubelet[2205]: E0307 00:53:41.451999 2205 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.452329 kubelet[2205]: I0307 00:53:41.452072 2205 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.456989 kubelet[2205]: E0307 00:53:41.456761 2205 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.456989 kubelet[2205]: I0307 00:53:41.456801 2205 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.458941 kubelet[2205]: E0307 00:53:41.458883 2205 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-2a659a64a8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:41.930374 kubelet[2205]: I0307 00:53:41.930082 2205 apiserver.go:52] "Watching apiserver" Mar 7 00:53:41.949411 kubelet[2205]: I0307 00:53:41.949318 2205 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:53:43.434464 systemd[1]: Reloading requested from client PID 2487 ('systemctl') (unit session-7.scope)... Mar 7 00:53:43.434484 systemd[1]: Reloading... Mar 7 00:53:43.542179 zram_generator::config[2528]: No configuration found. Mar 7 00:53:43.666325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:53:43.761792 systemd[1]: Reloading finished in 326 ms. Mar 7 00:53:43.807625 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:43.823645 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:53:43.825056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:43.832438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:53:43.970300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:53:43.985652 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:53:44.060279 kubelet[2572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:53:44.060279 kubelet[2572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:53:44.060279 kubelet[2572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:53:44.060630 kubelet[2572]: I0307 00:53:44.060343 2572 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:53:44.070089 kubelet[2572]: I0307 00:53:44.068885 2572 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:53:44.070089 kubelet[2572]: I0307 00:53:44.069002 2572 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:53:44.070089 kubelet[2572]: I0307 00:53:44.069644 2572 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:53:44.072747 kubelet[2572]: I0307 00:53:44.072717 2572 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 00:53:44.075909 kubelet[2572]: I0307 00:53:44.075872 2572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:53:44.081192 kubelet[2572]: E0307 00:53:44.081109 2572 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:53:44.081389 kubelet[2572]: I0307 00:53:44.081204 2572 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 7 00:53:44.083796 kubelet[2572]: I0307 00:53:44.083766 2572 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 00:53:44.083996 kubelet[2572]: I0307 00:53:44.083967 2572 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:53:44.084168 kubelet[2572]: I0307 00:53:44.083996 2572 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-2a659a64a8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 7 
00:53:44.084270 kubelet[2572]: I0307 00:53:44.084176 2572 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 00:53:44.084270 kubelet[2572]: I0307 00:53:44.084186 2572 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 00:53:44.084317 kubelet[2572]: I0307 00:53:44.084313 2572 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:53:44.084488 kubelet[2572]: I0307 00:53:44.084478 2572 kubelet.go:480] "Attempting to sync node with API server" Mar 7 00:53:44.084520 kubelet[2572]: I0307 00:53:44.084495 2572 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:53:44.084556 kubelet[2572]: I0307 00:53:44.084528 2572 kubelet.go:386] "Adding apiserver pod source" Mar 7 00:53:44.084556 kubelet[2572]: I0307 00:53:44.084545 2572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:53:44.086738 kubelet[2572]: I0307 00:53:44.086667 2572 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:53:44.087365 kubelet[2572]: I0307 00:53:44.087318 2572 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:53:44.093095 kubelet[2572]: I0307 00:53:44.091743 2572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 00:53:44.093095 kubelet[2572]: I0307 00:53:44.091792 2572 server.go:1289] "Started kubelet" Mar 7 00:53:44.095829 kubelet[2572]: I0307 00:53:44.095773 2572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:53:44.096230 kubelet[2572]: I0307 00:53:44.096201 2572 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:53:44.102942 kubelet[2572]: I0307 00:53:44.102701 2572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:53:44.121584 kubelet[2572]: I0307 00:53:44.121547 2572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:53:44.132019 kubelet[2572]: I0307 00:53:44.121962 2572 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 00:53:44.137149 kubelet[2572]: I0307 00:53:44.137098 2572 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 7 00:53:44.138407 kubelet[2572]: I0307 00:53:44.138376 2572 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 00:53:44.138936 kubelet[2572]: I0307 00:53:44.138566 2572 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 00:53:44.138936 kubelet[2572]: I0307 00:53:44.138606 2572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 7 00:53:44.138936 kubelet[2572]: I0307 00:53:44.138617 2572 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:53:44.138936 kubelet[2572]: E0307 00:53:44.138669 2572 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:53:44.138936 kubelet[2572]: E0307 00:53:44.123023 2572 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-2a659a64a8\" not found" Mar 7 00:53:44.138936 kubelet[2572]: I0307 00:53:44.121974 2572 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 00:53:44.139239 kubelet[2572]: I0307 00:53:44.139019 2572 reconciler.go:26] "Reconciler: start to sync state" Mar 7 00:53:44.142515 kubelet[2572]: I0307 00:53:44.128406 2572 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:53:44.144760 kubelet[2572]: I0307 00:53:44.144595 2572 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:53:44.144760 kubelet[2572]: I0307 00:53:44.144718 2572 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:53:44.146357 kubelet[2572]: I0307 00:53:44.129289 2572 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:53:44.156160 kubelet[2572]: I0307 00:53:44.156112 2572 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:53:44.161709 kubelet[2572]: E0307 00:53:44.160579 2572 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:53:44.221691 kubelet[2572]: I0307 00:53:44.221659 2572 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:53:44.221691 kubelet[2572]: I0307 00:53:44.221681 2572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:53:44.221691 kubelet[2572]: I0307 00:53:44.221707 2572 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:53:44.221886 kubelet[2572]: I0307 00:53:44.221860 2572 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 00:53:44.221886 kubelet[2572]: I0307 00:53:44.221871 2572 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 00:53:44.221938 kubelet[2572]: I0307 00:53:44.221889 2572 policy_none.go:49] "None policy: Start" Mar 7 00:53:44.221938 kubelet[2572]: I0307 00:53:44.221898 2572 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:53:44.221938 kubelet[2572]: I0307 00:53:44.221906 2572 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:53:44.222016 kubelet[2572]: I0307 00:53:44.221995 2572 state_mem.go:75] "Updated machine memory state" Mar 7 00:53:44.226761 kubelet[2572]: E0307 00:53:44.226708 2572 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:53:44.226948 kubelet[2572]: I0307 00:53:44.226926 2572 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:53:44.226999 kubelet[2572]: I0307 00:53:44.226946 2572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:53:44.227679 kubelet[2572]: I0307 00:53:44.227633 2572 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:53:44.230375 kubelet[2572]: 
E0307 00:53:44.230331 2572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:53:44.240375 kubelet[2572]: I0307 00:53:44.239724 2572 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.240375 kubelet[2572]: I0307 00:53:44.239895 2572 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.242859 kubelet[2572]: I0307 00:53:44.242654 2572 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.331668 kubelet[2572]: I0307 00:53:44.331462 2572 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.339554 kubelet[2572]: I0307 00:53:44.339456 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.339554 kubelet[2572]: I0307 00:53:44.339501 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.339554 kubelet[2572]: I0307 00:53:44.339517 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.350602 kubelet[2572]: I0307 00:53:44.350307 2572 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.351541 kubelet[2572]: I0307 00:53:44.351011 2572 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.440144 kubelet[2572]: I0307 00:53:44.440040 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.440437 kubelet[2572]: I0307 00:53:44.440388 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb3f96c4c80d5ad85a876dfb440f5dcd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" (UID: \"eb3f96c4c80d5ad85a876dfb440f5dcd\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.440620 kubelet[2572]: I0307 00:53:44.440417 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/ffa1bb3ecffc39f583eb0d17d1029745-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" (UID: \"ffa1bb3ecffc39f583eb0d17d1029745\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.440620 kubelet[2572]: I0307 00:53:44.440555 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffa1bb3ecffc39f583eb0d17d1029745-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" (UID: \"ffa1bb3ecffc39f583eb0d17d1029745\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.440899 kubelet[2572]: I0307 00:53:44.440755 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b5cb8e8c141736cc5839070cd60db01-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-2a659a64a8\" (UID: \"5b5cb8e8c141736cc5839070cd60db01\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:44.440899 kubelet[2572]: I0307 00:53:44.440784 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffa1bb3ecffc39f583eb0d17d1029745-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" (UID: \"ffa1bb3ecffc39f583eb0d17d1029745\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:45.086596 kubelet[2572]: I0307 00:53:45.086286 2572 apiserver.go:52] "Watching apiserver" Mar 7 00:53:45.139091 kubelet[2572]: I0307 00:53:45.138993 2572 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:53:45.191794 kubelet[2572]: I0307 00:53:45.191433 2572 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:45.192907 kubelet[2572]: I0307 00:53:45.192642 2572 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:45.195079 kubelet[2572]: I0307 00:53:45.194275 2572 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:45.205499 kubelet[2572]: E0307 00:53:45.204906 2572 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-2a659a64a8\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:45.214938 kubelet[2572]: E0307 00:53:45.214894 2572 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-2a659a64a8\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:45.215647 kubelet[2572]: E0307 00:53:45.215302 2572 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-2a659a64a8\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" Mar 7 00:53:45.251376 kubelet[2572]: I0307 00:53:45.251278 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-2a659a64a8" podStartSLOduration=1.251258781 podStartE2EDuration="1.251258781s" podCreationTimestamp="2026-03-07 00:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:45.237096115 +0000 UTC m=+1.243635796" 
watchObservedRunningTime="2026-03-07 00:53:45.251258781 +0000 UTC m=+1.257798422" Mar 7 00:53:45.264068 kubelet[2572]: I0307 00:53:45.263802 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-2a659a64a8" podStartSLOduration=1.263782129 podStartE2EDuration="1.263782129s" podCreationTimestamp="2026-03-07 00:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:45.251542941 +0000 UTC m=+1.258082582" watchObservedRunningTime="2026-03-07 00:53:45.263782129 +0000 UTC m=+1.270321770" Mar 7 00:53:49.083113 kubelet[2572]: I0307 00:53:49.083032 2572 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:53:49.083950 containerd[1473]: time="2026-03-07T00:53:49.083488478Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 7 00:53:49.085166 kubelet[2572]: I0307 00:53:49.084400 2572 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:53:50.051817 kubelet[2572]: I0307 00:53:50.051713 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-2a659a64a8" podStartSLOduration=6.051685935 podStartE2EDuration="6.051685935s" podCreationTimestamp="2026-03-07 00:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:45.264108809 +0000 UTC m=+1.270648450" watchObservedRunningTime="2026-03-07 00:53:50.051685935 +0000 UTC m=+6.058225576" Mar 7 00:53:50.068299 systemd[1]: Created slice kubepods-besteffort-podccb86a96_96e7_458e_8851_3c1c42e74d78.slice - libcontainer container kubepods-besteffort-podccb86a96_96e7_458e_8851_3c1c42e74d78.slice. 
Mar 7 00:53:50.078531 kubelet[2572]: I0307 00:53:50.078347 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw9c2\" (UniqueName: \"kubernetes.io/projected/ccb86a96-96e7-458e-8851-3c1c42e74d78-kube-api-access-rw9c2\") pod \"kube-proxy-5hsjw\" (UID: \"ccb86a96-96e7-458e-8851-3c1c42e74d78\") " pod="kube-system/kube-proxy-5hsjw" Mar 7 00:53:50.078531 kubelet[2572]: I0307 00:53:50.078401 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccb86a96-96e7-458e-8851-3c1c42e74d78-xtables-lock\") pod \"kube-proxy-5hsjw\" (UID: \"ccb86a96-96e7-458e-8851-3c1c42e74d78\") " pod="kube-system/kube-proxy-5hsjw" Mar 7 00:53:50.078531 kubelet[2572]: I0307 00:53:50.078421 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccb86a96-96e7-458e-8851-3c1c42e74d78-lib-modules\") pod \"kube-proxy-5hsjw\" (UID: \"ccb86a96-96e7-458e-8851-3c1c42e74d78\") " pod="kube-system/kube-proxy-5hsjw" Mar 7 00:53:50.078531 kubelet[2572]: I0307 00:53:50.078443 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ccb86a96-96e7-458e-8851-3c1c42e74d78-kube-proxy\") pod \"kube-proxy-5hsjw\" (UID: \"ccb86a96-96e7-458e-8851-3c1c42e74d78\") " pod="kube-system/kube-proxy-5hsjw" Mar 7 00:53:50.193995 kubelet[2572]: E0307 00:53:50.193913 2572 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 7 00:53:50.193995 kubelet[2572]: E0307 00:53:50.193966 2572 projected.go:194] Error preparing data for projected volume kube-api-access-rw9c2 for pod kube-system/kube-proxy-5hsjw: configmap "kube-root-ca.crt" not found Mar 7 00:53:50.194443 kubelet[2572]: E0307 00:53:50.194098 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ccb86a96-96e7-458e-8851-3c1c42e74d78-kube-api-access-rw9c2 podName:ccb86a96-96e7-458e-8851-3c1c42e74d78 nodeName:}" failed. No retries permitted until 2026-03-07 00:53:50.694018623 +0000 UTC m=+6.700558264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rw9c2" (UniqueName: "kubernetes.io/projected/ccb86a96-96e7-458e-8851-3c1c42e74d78-kube-api-access-rw9c2") pod "kube-proxy-5hsjw" (UID: "ccb86a96-96e7-458e-8851-3c1c42e74d78") : configmap "kube-root-ca.crt" not found Mar 7 00:53:50.303232 systemd[1]: Created slice kubepods-besteffort-poda841ddf2_019a_471b_a9b0_d893fac14d35.slice - libcontainer container kubepods-besteffort-poda841ddf2_019a_471b_a9b0_d893fac14d35.slice. 
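The MountVolume failure above is transient: the projected service-account volume for kube-proxy bundles the cluster CA from the kube-root-ca.crt ConfigMap, which the controller manager's root-ca-cert-publisher creates in each namespace shortly after it comes up, so the 500ms retry normally succeeds on its own. A quick client-go check for the ConfigMap, useful if the retry keeps failing; the kubeconfig path below is a placeholder:

    // rootca_check_sketch.go: verify that kube-root-ca.crt exists in a namespace
    // (diagnostic sketch; the kubeconfig path is hypothetical).
    package main

    import (
        "context"
        "log"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("kube-system").
            Get(context.Background(), "kube-root-ca.crt", metav1.GetOptions{})
        switch {
        case apierrors.IsNotFound(err):
            log.Println("kube-root-ca.crt not published yet; projected volumes will keep retrying")
        case err != nil:
            log.Fatal(err)
        default:
            log.Printf("kube-root-ca.crt present, %d bytes of ca.crt", len(cm.Data["ca.crt"]))
        }
    }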
Mar 7 00:53:50.381298 kubelet[2572]: I0307 00:53:50.381040 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a841ddf2-019a-471b-a9b0-d893fac14d35-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-d7fg7\" (UID: \"a841ddf2-019a-471b-a9b0-d893fac14d35\") " pod="tigera-operator/tigera-operator-6bf85f8dd-d7fg7" Mar 7 00:53:50.381298 kubelet[2572]: I0307 00:53:50.381215 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wzg\" (UniqueName: \"kubernetes.io/projected/a841ddf2-019a-471b-a9b0-d893fac14d35-kube-api-access-h6wzg\") pod \"tigera-operator-6bf85f8dd-d7fg7\" (UID: \"a841ddf2-019a-471b-a9b0-d893fac14d35\") " pod="tigera-operator/tigera-operator-6bf85f8dd-d7fg7" Mar 7 00:53:50.609902 containerd[1473]: time="2026-03-07T00:53:50.609481938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-d7fg7,Uid:a841ddf2-019a-471b-a9b0-d893fac14d35,Namespace:tigera-operator,Attempt:0,}" Mar 7 00:53:50.638493 containerd[1473]: time="2026-03-07T00:53:50.638196155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:50.638493 containerd[1473]: time="2026-03-07T00:53:50.638317755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:50.638493 containerd[1473]: time="2026-03-07T00:53:50.638336115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:50.638664 containerd[1473]: time="2026-03-07T00:53:50.638441555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:50.666434 systemd[1]: Started cri-containerd-b8c484b9a085700fc34763917e9e1388464439ee7fb3bdc89a3d6deff669e00b.scope - libcontainer container b8c484b9a085700fc34763917e9e1388464439ee7fb3bdc89a3d6deff669e00b. Mar 7 00:53:50.704001 containerd[1473]: time="2026-03-07T00:53:50.703954624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-d7fg7,Uid:a841ddf2-019a-471b-a9b0-d893fac14d35,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b8c484b9a085700fc34763917e9e1388464439ee7fb3bdc89a3d6deff669e00b\"" Mar 7 00:53:50.708438 containerd[1473]: time="2026-03-07T00:53:50.708378460Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 00:53:50.982147 containerd[1473]: time="2026-03-07T00:53:50.981931806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hsjw,Uid:ccb86a96-96e7-458e-8851-3c1c42e74d78,Namespace:kube-system,Attempt:0,}" Mar 7 00:53:51.009940 containerd[1473]: time="2026-03-07T00:53:51.009586184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:53:51.009940 containerd[1473]: time="2026-03-07T00:53:51.009811264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:53:51.009940 containerd[1473]: time="2026-03-07T00:53:51.009884784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:51.011089 containerd[1473]: time="2026-03-07T00:53:51.010654104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:53:51.033372 systemd[1]: Started cri-containerd-20c21e3166d5e42b8807f84a92beb305d8fc3b027c0404887ae89f9c5766c3bb.scope - libcontainer container 20c21e3166d5e42b8807f84a92beb305d8fc3b027c0404887ae89f9c5766c3bb. Mar 7 00:53:51.061008 containerd[1473]: time="2026-03-07T00:53:51.060687906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5hsjw,Uid:ccb86a96-96e7-458e-8851-3c1c42e74d78,Namespace:kube-system,Attempt:0,} returns sandbox id \"20c21e3166d5e42b8807f84a92beb305d8fc3b027c0404887ae89f9c5766c3bb\"" Mar 7 00:53:51.069727 containerd[1473]: time="2026-03-07T00:53:51.069312219Z" level=info msg="CreateContainer within sandbox \"20c21e3166d5e42b8807f84a92beb305d8fc3b027c0404887ae89f9c5766c3bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:53:51.091092 containerd[1473]: time="2026-03-07T00:53:51.090814003Z" level=info msg="CreateContainer within sandbox \"20c21e3166d5e42b8807f84a92beb305d8fc3b027c0404887ae89f9c5766c3bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d75a1909c9859a8bef7e0b7d02b6365da4e08a364715ac64df7ac2fb1b7ae44\"" Mar 7 00:53:51.093219 containerd[1473]: time="2026-03-07T00:53:51.091930722Z" level=info msg="StartContainer for \"6d75a1909c9859a8bef7e0b7d02b6365da4e08a364715ac64df7ac2fb1b7ae44\"" Mar 7 00:53:51.125334 systemd[1]: Started cri-containerd-6d75a1909c9859a8bef7e0b7d02b6365da4e08a364715ac64df7ac2fb1b7ae44.scope - libcontainer container 6d75a1909c9859a8bef7e0b7d02b6365da4e08a364715ac64df7ac2fb1b7ae44. Mar 7 00:53:51.173002 containerd[1473]: time="2026-03-07T00:53:51.172327661Z" level=info msg="StartContainer for \"6d75a1909c9859a8bef7e0b7d02b6365da4e08a364715ac64df7ac2fb1b7ae44\" returns successfully" Mar 7 00:53:51.224573 kubelet[2572]: I0307 00:53:51.224160 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5hsjw" podStartSLOduration=1.224141702 podStartE2EDuration="1.224141702s" podCreationTimestamp="2026-03-07 00:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:53:51.223245182 +0000 UTC m=+7.229784823" watchObservedRunningTime="2026-03-07 00:53:51.224141702 +0000 UTC m=+7.230681343" Mar 7 00:53:52.418733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount42134869.mount: Deactivated successfully. 
Mar 7 00:53:52.802277 containerd[1473]: time="2026-03-07T00:53:52.801146206Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:52.802941 containerd[1473]: time="2026-03-07T00:53:52.802837004Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 00:53:52.803790 containerd[1473]: time="2026-03-07T00:53:52.803754484Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:52.807727 containerd[1473]: time="2026-03-07T00:53:52.807470681Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:53:52.809489 containerd[1473]: time="2026-03-07T00:53:52.809425200Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.1009851s" Mar 7 00:53:52.809638 containerd[1473]: time="2026-03-07T00:53:52.809498960Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 00:53:52.816335 containerd[1473]: time="2026-03-07T00:53:52.816154555Z" level=info msg="CreateContainer within sandbox \"b8c484b9a085700fc34763917e9e1388464439ee7fb3bdc89a3d6deff669e00b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 00:53:52.829707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133198045.mount: Deactivated successfully. Mar 7 00:53:52.832129 containerd[1473]: time="2026-03-07T00:53:52.832086143Z" level=info msg="CreateContainer within sandbox \"b8c484b9a085700fc34763917e9e1388464439ee7fb3bdc89a3d6deff669e00b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636\"" Mar 7 00:53:52.833907 containerd[1473]: time="2026-03-07T00:53:52.832990502Z" level=info msg="StartContainer for \"0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636\"" Mar 7 00:53:52.872474 systemd[1]: Started cri-containerd-0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636.scope - libcontainer container 0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636. 
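The pull above (quay.io/tigera/operator:v1.40.7, resolved to the sha256:532607... digest in about 2.1s) runs through containerd's CRI image service. For troubleshooting registry or unpack problems, the same fetch can be reproduced with the containerd Go client; the socket path and the "k8s.io" namespace (where the CRI plugin keeps kubelet-managed images) are assumed defaults for this host:

    // pull_sketch.go: re-pull the operator image with the containerd client
    // (illustrative; socket path and namespace are assumptions).
    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.40.7", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s digest=%s", img.Name(), img.Target().Digest)
    }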
Mar 7 00:53:52.904557 containerd[1473]: time="2026-03-07T00:53:52.904514450Z" level=info msg="StartContainer for \"0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636\" returns successfully" Mar 7 00:53:54.417531 kubelet[2572]: I0307 00:53:54.417462 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-d7fg7" podStartSLOduration=2.312631803 podStartE2EDuration="4.41743334s" podCreationTimestamp="2026-03-07 00:53:50 +0000 UTC" firstStartedPulling="2026-03-07 00:53:50.705765942 +0000 UTC m=+6.712305623" lastFinishedPulling="2026-03-07 00:53:52.810567479 +0000 UTC m=+8.817107160" observedRunningTime="2026-03-07 00:53:53.23775357 +0000 UTC m=+9.244293251" watchObservedRunningTime="2026-03-07 00:53:54.41743334 +0000 UTC m=+10.423972981" Mar 7 00:53:59.203974 sudo[1710]: pam_unix(sudo:session): session closed for user root Mar 7 00:53:59.298931 sshd[1707]: pam_unix(sshd:session): session closed for user core Mar 7 00:53:59.304446 systemd[1]: sshd@6-116.202.31.117:22-20.161.92.111:36448.service: Deactivated successfully. Mar 7 00:53:59.308227 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 00:53:59.308437 systemd[1]: session-7.scope: Consumed 6.617s CPU time, 154.2M memory peak, 0B memory swap peak. Mar 7 00:53:59.311926 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Mar 7 00:53:59.313727 systemd-logind[1455]: Removed session 7. Mar 7 00:54:06.366537 systemd[1]: Created slice kubepods-besteffort-pod41cc6505_a39a_4134_8d9e_93a537122e4c.slice - libcontainer container kubepods-besteffort-pod41cc6505_a39a_4134_8d9e_93a537122e4c.slice. Mar 7 00:54:06.389691 kubelet[2572]: I0307 00:54:06.389639 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf4ch\" (UniqueName: \"kubernetes.io/projected/41cc6505-a39a-4134-8d9e-93a537122e4c-kube-api-access-vf4ch\") pod \"calico-typha-658979cb4f-pgppp\" (UID: \"41cc6505-a39a-4134-8d9e-93a537122e4c\") " pod="calico-system/calico-typha-658979cb4f-pgppp" Mar 7 00:54:06.389691 kubelet[2572]: I0307 00:54:06.389693 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41cc6505-a39a-4134-8d9e-93a537122e4c-tigera-ca-bundle\") pod \"calico-typha-658979cb4f-pgppp\" (UID: \"41cc6505-a39a-4134-8d9e-93a537122e4c\") " pod="calico-system/calico-typha-658979cb4f-pgppp" Mar 7 00:54:06.390221 kubelet[2572]: I0307 00:54:06.389711 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41cc6505-a39a-4134-8d9e-93a537122e4c-typha-certs\") pod \"calico-typha-658979cb4f-pgppp\" (UID: \"41cc6505-a39a-4134-8d9e-93a537122e4c\") " pod="calico-system/calico-typha-658979cb4f-pgppp" Mar 7 00:54:06.507323 systemd[1]: Created slice kubepods-besteffort-pod2e578d86_21c9_4898_b829_79039078a84d.slice - libcontainer container kubepods-besteffort-pod2e578d86_21c9_4898_b829_79039078a84d.slice. 
Mar 7 00:54:06.591557 kubelet[2572]: I0307 00:54:06.590548 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-cni-log-dir\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591557 kubelet[2572]: I0307 00:54:06.590594 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-lib-modules\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591557 kubelet[2572]: I0307 00:54:06.590617 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-nodeproc\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591557 kubelet[2572]: I0307 00:54:06.590633 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-policysync\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591557 kubelet[2572]: I0307 00:54:06.590653 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2e578d86-21c9-4898-b829-79039078a84d-node-certs\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591812 kubelet[2572]: I0307 00:54:06.590669 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-var-lib-calico\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591812 kubelet[2572]: I0307 00:54:06.590689 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-xtables-lock\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591812 kubelet[2572]: I0307 00:54:06.590708 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tffkz\" (UniqueName: \"kubernetes.io/projected/2e578d86-21c9-4898-b829-79039078a84d-kube-api-access-tffkz\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591812 kubelet[2572]: I0307 00:54:06.590760 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-bpffs\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591812 kubelet[2572]: I0307 00:54:06.590782 2572 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-cni-net-dir\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591929 kubelet[2572]: I0307 00:54:06.590800 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-sys-fs\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591929 kubelet[2572]: I0307 00:54:06.590837 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e578d86-21c9-4898-b829-79039078a84d-tigera-ca-bundle\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591929 kubelet[2572]: I0307 00:54:06.590856 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-cni-bin-dir\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591929 kubelet[2572]: I0307 00:54:06.590872 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-flexvol-driver-host\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.591929 kubelet[2572]: I0307 00:54:06.590889 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2e578d86-21c9-4898-b829-79039078a84d-var-run-calico\") pod \"calico-node-27n9r\" (UID: \"2e578d86-21c9-4898-b829-79039078a84d\") " pod="calico-system/calico-node-27n9r" Mar 7 00:54:06.596189 kubelet[2572]: E0307 00:54:06.595602 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:06.675321 containerd[1473]: time="2026-03-07T00:54:06.675199409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658979cb4f-pgppp,Uid:41cc6505-a39a-4134-8d9e-93a537122e4c,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:06.695830 kubelet[2572]: I0307 00:54:06.692617 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/127a57c4-5bc8-40d4-8acc-8570b2710eaf-kubelet-dir\") pod \"csi-node-driver-8p8nh\" (UID: \"127a57c4-5bc8-40d4-8acc-8570b2710eaf\") " pod="calico-system/csi-node-driver-8p8nh" Mar 7 00:54:06.695830 kubelet[2572]: I0307 00:54:06.692686 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/127a57c4-5bc8-40d4-8acc-8570b2710eaf-registration-dir\") pod \"csi-node-driver-8p8nh\" (UID: 
\"127a57c4-5bc8-40d4-8acc-8570b2710eaf\") " pod="calico-system/csi-node-driver-8p8nh" Mar 7 00:54:06.695830 kubelet[2572]: I0307 00:54:06.692731 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/127a57c4-5bc8-40d4-8acc-8570b2710eaf-varrun\") pod \"csi-node-driver-8p8nh\" (UID: \"127a57c4-5bc8-40d4-8acc-8570b2710eaf\") " pod="calico-system/csi-node-driver-8p8nh" Mar 7 00:54:06.695830 kubelet[2572]: I0307 00:54:06.692769 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b5wh\" (UniqueName: \"kubernetes.io/projected/127a57c4-5bc8-40d4-8acc-8570b2710eaf-kube-api-access-2b5wh\") pod \"csi-node-driver-8p8nh\" (UID: \"127a57c4-5bc8-40d4-8acc-8570b2710eaf\") " pod="calico-system/csi-node-driver-8p8nh" Mar 7 00:54:06.695830 kubelet[2572]: I0307 00:54:06.692846 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/127a57c4-5bc8-40d4-8acc-8570b2710eaf-socket-dir\") pod \"csi-node-driver-8p8nh\" (UID: \"127a57c4-5bc8-40d4-8acc-8570b2710eaf\") " pod="calico-system/csi-node-driver-8p8nh" Mar 7 00:54:06.700629 kubelet[2572]: E0307 00:54:06.700593 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.701105 kubelet[2572]: W0307 00:54:06.701059 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.701676 kubelet[2572]: E0307 00:54:06.701416 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.705325 kubelet[2572]: E0307 00:54:06.703236 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.705498 kubelet[2572]: W0307 00:54:06.705468 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.705566 kubelet[2572]: E0307 00:54:06.705554 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.705929 kubelet[2572]: E0307 00:54:06.705913 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.706033 kubelet[2572]: W0307 00:54:06.706019 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.706129 kubelet[2572]: E0307 00:54:06.706115 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.707546 kubelet[2572]: E0307 00:54:06.707042 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.712163 kubelet[2572]: W0307 00:54:06.712116 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.712163 kubelet[2572]: E0307 00:54:06.712160 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.715129 kubelet[2572]: E0307 00:54:06.714793 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.715552 kubelet[2572]: W0307 00:54:06.715520 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.716705 kubelet[2572]: E0307 00:54:06.716673 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.717944 kubelet[2572]: E0307 00:54:06.717918 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.718127 kubelet[2572]: W0307 00:54:06.718101 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.720118 kubelet[2572]: E0307 00:54:06.720079 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.720592 kubelet[2572]: E0307 00:54:06.720569 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.720685 kubelet[2572]: W0307 00:54:06.720670 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.720883 kubelet[2572]: E0307 00:54:06.720760 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.721099 kubelet[2572]: E0307 00:54:06.721083 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.721189 kubelet[2572]: W0307 00:54:06.721174 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.721504 kubelet[2572]: E0307 00:54:06.721238 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.722253 kubelet[2572]: E0307 00:54:06.722234 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.724012 kubelet[2572]: W0307 00:54:06.723832 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.724012 kubelet[2572]: E0307 00:54:06.723869 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.725377 kubelet[2572]: E0307 00:54:06.725352 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.725467 kubelet[2572]: W0307 00:54:06.725451 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.725700 kubelet[2572]: E0307 00:54:06.725536 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.728134 kubelet[2572]: E0307 00:54:06.728109 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.728410 kubelet[2572]: W0307 00:54:06.728232 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.728410 kubelet[2572]: E0307 00:54:06.728261 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.728573 kubelet[2572]: E0307 00:54:06.728560 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.728815 kubelet[2572]: W0307 00:54:06.728796 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.730110 kubelet[2572]: E0307 00:54:06.729199 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.731109 kubelet[2572]: E0307 00:54:06.730945 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.732667 kubelet[2572]: W0307 00:54:06.732276 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.732667 kubelet[2572]: E0307 00:54:06.732308 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.733359 kubelet[2572]: E0307 00:54:06.733277 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.733359 kubelet[2572]: W0307 00:54:06.733298 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.733359 kubelet[2572]: E0307 00:54:06.733319 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.733929 kubelet[2572]: E0307 00:54:06.733807 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.733929 kubelet[2572]: W0307 00:54:06.733822 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.733929 kubelet[2572]: E0307 00:54:06.733854 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.734452 kubelet[2572]: E0307 00:54:06.734435 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.734640 containerd[1473]: time="2026-03-07T00:54:06.730618260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:06.734693 kubelet[2572]: W0307 00:54:06.734667 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.735121 kubelet[2572]: E0307 00:54:06.734692 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.735291 kubelet[2572]: E0307 00:54:06.735273 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.735405 kubelet[2572]: W0307 00:54:06.735290 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.735405 kubelet[2572]: E0307 00:54:06.735308 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.736080 kubelet[2572]: E0307 00:54:06.736043 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.736080 kubelet[2572]: W0307 00:54:06.736076 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.736363 kubelet[2572]: E0307 00:54:06.736093 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.736486 containerd[1473]: time="2026-03-07T00:54:06.735502937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:06.736697 kubelet[2572]: E0307 00:54:06.736414 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.736697 kubelet[2572]: W0307 00:54:06.736448 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.736697 kubelet[2572]: E0307 00:54:06.736463 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.737087 kubelet[2572]: E0307 00:54:06.736708 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.737087 kubelet[2572]: W0307 00:54:06.736719 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.737087 kubelet[2572]: E0307 00:54:06.736730 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.737087 kubelet[2572]: E0307 00:54:06.737063 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.737087 kubelet[2572]: W0307 00:54:06.737077 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.737087 kubelet[2572]: E0307 00:54:06.737089 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.737314 kubelet[2572]: E0307 00:54:06.737283 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.737314 kubelet[2572]: W0307 00:54:06.737310 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.737372 kubelet[2572]: E0307 00:54:06.737322 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.737803 kubelet[2572]: E0307 00:54:06.737501 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.737803 kubelet[2572]: W0307 00:54:06.737514 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.737803 kubelet[2572]: E0307 00:54:06.737522 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.738398 containerd[1473]: time="2026-03-07T00:54:06.736799377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:06.738439 kubelet[2572]: E0307 00:54:06.737818 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.738439 kubelet[2572]: W0307 00:54:06.737833 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.738439 kubelet[2572]: E0307 00:54:06.737846 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.738439 kubelet[2572]: E0307 00:54:06.738370 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.738439 kubelet[2572]: W0307 00:54:06.738388 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.738439 kubelet[2572]: E0307 00:54:06.738401 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.739350 containerd[1473]: time="2026-03-07T00:54:06.738114736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:06.740257 kubelet[2572]: E0307 00:54:06.740228 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.740257 kubelet[2572]: W0307 00:54:06.740252 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.740494 kubelet[2572]: E0307 00:54:06.740272 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.741702 kubelet[2572]: E0307 00:54:06.741150 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.741702 kubelet[2572]: W0307 00:54:06.741175 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.742947 kubelet[2572]: E0307 00:54:06.742709 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.743117 kubelet[2572]: E0307 00:54:06.743094 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.743117 kubelet[2572]: W0307 00:54:06.743113 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.743191 kubelet[2572]: E0307 00:54:06.743143 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.772621 systemd[1]: Started cri-containerd-288261d23cef5f1ba070fa6304330ceab89c49120394cbbb46aa1bf84bfb003c.scope - libcontainer container 288261d23cef5f1ba070fa6304330ceab89c49120394cbbb46aa1bf84bfb003c. Mar 7 00:54:06.796336 kubelet[2572]: E0307 00:54:06.796277 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.796641 kubelet[2572]: W0307 00:54:06.796567 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.797325 kubelet[2572]: E0307 00:54:06.797297 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.797959 kubelet[2572]: E0307 00:54:06.797893 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.797959 kubelet[2572]: W0307 00:54:06.797913 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.797959 kubelet[2572]: E0307 00:54:06.797932 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.798345 kubelet[2572]: E0307 00:54:06.798176 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.798345 kubelet[2572]: W0307 00:54:06.798186 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.798345 kubelet[2572]: E0307 00:54:06.798196 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.798345 kubelet[2572]: E0307 00:54:06.798344 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.798345 kubelet[2572]: W0307 00:54:06.798351 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.798656 kubelet[2572]: E0307 00:54:06.798360 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.798701 kubelet[2572]: E0307 00:54:06.798670 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.798701 kubelet[2572]: W0307 00:54:06.798680 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.798701 kubelet[2572]: E0307 00:54:06.798690 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.798895 kubelet[2572]: E0307 00:54:06.798872 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.798895 kubelet[2572]: W0307 00:54:06.798881 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.798895 kubelet[2572]: E0307 00:54:06.798891 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.800422 kubelet[2572]: E0307 00:54:06.799042 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.800422 kubelet[2572]: W0307 00:54:06.799077 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.800422 kubelet[2572]: E0307 00:54:06.799085 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.800422 kubelet[2572]: E0307 00:54:06.799242 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.800422 kubelet[2572]: W0307 00:54:06.799249 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.800422 kubelet[2572]: E0307 00:54:06.799257 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.800422 kubelet[2572]: E0307 00:54:06.799384 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.800422 kubelet[2572]: W0307 00:54:06.799390 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.800422 kubelet[2572]: E0307 00:54:06.799397 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.801443 kubelet[2572]: E0307 00:54:06.800586 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.801443 kubelet[2572]: W0307 00:54:06.800601 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.801443 kubelet[2572]: E0307 00:54:06.800619 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.801926 kubelet[2572]: E0307 00:54:06.801632 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.801926 kubelet[2572]: W0307 00:54:06.801649 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.801926 kubelet[2572]: E0307 00:54:06.801666 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.802979 kubelet[2572]: E0307 00:54:06.802809 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.802979 kubelet[2572]: W0307 00:54:06.802830 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.802979 kubelet[2572]: E0307 00:54:06.802847 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.803349 kubelet[2572]: E0307 00:54:06.803221 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.803349 kubelet[2572]: W0307 00:54:06.803234 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.803349 kubelet[2572]: E0307 00:54:06.803246 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.803529 kubelet[2572]: E0307 00:54:06.803517 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.803642 kubelet[2572]: W0307 00:54:06.803577 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.803642 kubelet[2572]: E0307 00:54:06.803591 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.804126 kubelet[2572]: E0307 00:54:06.804090 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.804126 kubelet[2572]: W0307 00:54:06.804103 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.804320 kubelet[2572]: E0307 00:54:06.804213 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.804549 kubelet[2572]: E0307 00:54:06.804538 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.804678 kubelet[2572]: W0307 00:54:06.804608 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.804678 kubelet[2572]: E0307 00:54:06.804623 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.804966 kubelet[2572]: E0307 00:54:06.804954 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.805116 kubelet[2572]: W0307 00:54:06.805015 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.805116 kubelet[2572]: E0307 00:54:06.805031 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.805696 kubelet[2572]: E0307 00:54:06.805656 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.806183 kubelet[2572]: W0307 00:54:06.805670 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.806183 kubelet[2572]: E0307 00:54:06.805909 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.806619 kubelet[2572]: E0307 00:54:06.806521 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.806619 kubelet[2572]: W0307 00:54:06.806535 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.806619 kubelet[2572]: E0307 00:54:06.806547 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.807623 kubelet[2572]: E0307 00:54:06.807435 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.807623 kubelet[2572]: W0307 00:54:06.807450 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.807623 kubelet[2572]: E0307 00:54:06.807463 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.807901 kubelet[2572]: E0307 00:54:06.807824 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.807901 kubelet[2572]: W0307 00:54:06.807837 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.807901 kubelet[2572]: E0307 00:54:06.807848 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.809036 kubelet[2572]: E0307 00:54:06.808900 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.809036 kubelet[2572]: W0307 00:54:06.808918 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.809036 kubelet[2572]: E0307 00:54:06.808935 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.810004 kubelet[2572]: E0307 00:54:06.809470 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.810004 kubelet[2572]: W0307 00:54:06.809486 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.810004 kubelet[2572]: E0307 00:54:06.809498 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.816091 kubelet[2572]: E0307 00:54:06.813419 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.816091 kubelet[2572]: W0307 00:54:06.813620 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.816091 kubelet[2572]: E0307 00:54:06.813642 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.816091 kubelet[2572]: E0307 00:54:06.814902 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.816091 kubelet[2572]: W0307 00:54:06.814918 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.816091 kubelet[2572]: E0307 00:54:06.814936 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:06.816295 containerd[1473]: time="2026-03-07T00:54:06.815781615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27n9r,Uid:2e578d86-21c9-4898-b829-79039078a84d,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:06.823657 kubelet[2572]: E0307 00:54:06.823631 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:06.823866 kubelet[2572]: W0307 00:54:06.823751 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:06.823998 kubelet[2572]: E0307 00:54:06.823928 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:06.831245 containerd[1473]: time="2026-03-07T00:54:06.831195047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-658979cb4f-pgppp,Uid:41cc6505-a39a-4134-8d9e-93a537122e4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"288261d23cef5f1ba070fa6304330ceab89c49120394cbbb46aa1bf84bfb003c\"" Mar 7 00:54:06.834991 containerd[1473]: time="2026-03-07T00:54:06.834838246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 00:54:06.856041 containerd[1473]: time="2026-03-07T00:54:06.855923435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:06.856041 containerd[1473]: time="2026-03-07T00:54:06.856002595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:06.856425 containerd[1473]: time="2026-03-07T00:54:06.856020275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:06.856425 containerd[1473]: time="2026-03-07T00:54:06.856364594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:06.876326 systemd[1]: Started cri-containerd-7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6.scope - libcontainer container 7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6. Mar 7 00:54:06.906378 containerd[1473]: time="2026-03-07T00:54:06.906335368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-27n9r,Uid:2e578d86-21c9-4898-b829-79039078a84d,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\"" Mar 7 00:54:08.141821 kubelet[2572]: E0307 00:54:08.139772 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:08.163814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount395914883.mount: Deactivated successfully. 
Mar 7 00:54:08.606371 containerd[1473]: time="2026-03-07T00:54:08.606308822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:08.607978 containerd[1473]: time="2026-03-07T00:54:08.607901381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 7 00:54:08.609284 containerd[1473]: time="2026-03-07T00:54:08.608995381Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:08.615001 containerd[1473]: time="2026-03-07T00:54:08.614935298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:08.616774 containerd[1473]: time="2026-03-07T00:54:08.616724417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.781726452s" Mar 7 00:54:08.617213 containerd[1473]: time="2026-03-07T00:54:08.616934737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 00:54:08.619990 containerd[1473]: time="2026-03-07T00:54:08.619597295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 00:54:08.640597 containerd[1473]: time="2026-03-07T00:54:08.640539845Z" level=info msg="CreateContainer within sandbox \"288261d23cef5f1ba070fa6304330ceab89c49120394cbbb46aa1bf84bfb003c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 00:54:08.662278 containerd[1473]: time="2026-03-07T00:54:08.662215474Z" level=info msg="CreateContainer within sandbox \"288261d23cef5f1ba070fa6304330ceab89c49120394cbbb46aa1bf84bfb003c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17e15839eb42cc4e860db9aa1cec97d0fface5f6aab6f6c0396d79108cd40c88\"" Mar 7 00:54:08.665135 containerd[1473]: time="2026-03-07T00:54:08.662820913Z" level=info msg="StartContainer for \"17e15839eb42cc4e860db9aa1cec97d0fface5f6aab6f6c0396d79108cd40c88\"" Mar 7 00:54:08.702304 systemd[1]: Started cri-containerd-17e15839eb42cc4e860db9aa1cec97d0fface5f6aab6f6c0396d79108cd40c88.scope - libcontainer container 17e15839eb42cc4e860db9aa1cec97d0fface5f6aab6f6c0396d79108cd40c88. 
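The "Pulled image ... in 1.781726452s" figure above, and the podStartSLOduration/podStartE2EDuration values in the startup-latency entry that follows, are all derived from timestamps that kubelet and containerd print in these lines. Below is a small illustrative Python sketch (not kubelet's actual pod_startup_latency_tracker code) that reproduces the arithmetic from the logged RFC 3339 timestamps; the values are copied from the surrounding entries.

    # Illustrative only: recompute the durations reported in the nearby log
    # entries from the timestamps kubelet prints. This is not kubelet's code.
    from datetime import datetime

    def ts(s: str) -> datetime:
        # Log timestamps look like "2026-03-07 00:54:08.619407295 +0000 UTC";
        # Python's datetime keeps only microseconds, so trim the fraction.
        date, clock = s.split()[0], s.split()[1]
        whole, frac = (clock.split(".") + ["0"])[:2]
        return datetime.fromisoformat(f"{date}T{whole}.{frac[:6]}+00:00")

    first_started_pulling = ts("2026-03-07 00:54:06.833697006 +0000 UTC")
    last_finished_pulling = ts("2026-03-07 00:54:08.619407295 +0000 UTC")
    pod_created           = ts("2026-03-07 00:54:06.000000000 +0000 UTC")
    observed_running      = ts("2026-03-07 00:54:09.286085722 +0000 UTC")

    pull_window = last_finished_pulling - first_started_pulling
    e2e         = observed_running - pod_created
    slo         = e2e - pull_window  # roughly: end-to-end start time minus image pulling

    print(f"image pull window  ~ {pull_window.total_seconds():.3f}s")  # ~ 1.786s
    print(f"podStartE2EDuration ~ {e2e.total_seconds():.3f}s")         # ~ 3.286s
    print(f"podStartSLOduration ~ {slo.total_seconds():.3f}s")         # ~ 1.500s

The pull window computed this way is a few milliseconds longer than containerd's own 1.781726452s figure, since kubelet's firstStartedPulling/lastFinishedPulling bracket a slightly wider span than containerd's pull itself.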
Mar 7 00:54:08.741139 containerd[1473]: time="2026-03-07T00:54:08.740963354Z" level=info msg="StartContainer for \"17e15839eb42cc4e860db9aa1cec97d0fface5f6aab6f6c0396d79108cd40c88\" returns successfully" Mar 7 00:54:09.287463 kubelet[2572]: I0307 00:54:09.286109 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-658979cb4f-pgppp" podStartSLOduration=1.5003754329999999 podStartE2EDuration="3.286085722s" podCreationTimestamp="2026-03-07 00:54:06 +0000 UTC" firstStartedPulling="2026-03-07 00:54:06.833697006 +0000 UTC m=+22.840236647" lastFinishedPulling="2026-03-07 00:54:08.619407295 +0000 UTC m=+24.625946936" observedRunningTime="2026-03-07 00:54:09.284682003 +0000 UTC m=+25.291221684" watchObservedRunningTime="2026-03-07 00:54:09.286085722 +0000 UTC m=+25.292625403" Mar 7 00:54:09.293390 kubelet[2572]: E0307 00:54:09.293317 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.293390 kubelet[2572]: W0307 00:54:09.293379 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.293552 kubelet[2572]: E0307 00:54:09.293405 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.294026 kubelet[2572]: E0307 00:54:09.294002 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.294380 kubelet[2572]: W0307 00:54:09.294026 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.294380 kubelet[2572]: E0307 00:54:09.294276 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.294678 kubelet[2572]: E0307 00:54:09.294642 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.294678 kubelet[2572]: W0307 00:54:09.294671 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.294798 kubelet[2572]: E0307 00:54:09.294687 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.295387 kubelet[2572]: E0307 00:54:09.295284 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.295387 kubelet[2572]: W0307 00:54:09.295300 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.295387 kubelet[2572]: E0307 00:54:09.295314 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:09.296224 kubelet[2572]: E0307 00:54:09.296204 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.296224 kubelet[2572]: W0307 00:54:09.296220 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.296500 kubelet[2572]: E0307 00:54:09.296245 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.296612 kubelet[2572]: E0307 00:54:09.296597 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.296612 kubelet[2572]: W0307 00:54:09.296610 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.296975 kubelet[2572]: E0307 00:54:09.296621 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.296975 kubelet[2572]: E0307 00:54:09.296935 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.296975 kubelet[2572]: W0307 00:54:09.296946 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.296975 kubelet[2572]: E0307 00:54:09.296957 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.297408 kubelet[2572]: E0307 00:54:09.297390 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.297408 kubelet[2572]: W0307 00:54:09.297402 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.298212 kubelet[2572]: E0307 00:54:09.297417 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.298708 kubelet[2572]: E0307 00:54:09.298606 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.298708 kubelet[2572]: W0307 00:54:09.298624 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.298708 kubelet[2572]: E0307 00:54:09.298638 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:09.299133 kubelet[2572]: E0307 00:54:09.299116 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.299133 kubelet[2572]: W0307 00:54:09.299132 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.299308 kubelet[2572]: E0307 00:54:09.299144 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.299373 kubelet[2572]: E0307 00:54:09.299363 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.299406 kubelet[2572]: W0307 00:54:09.299373 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.299406 kubelet[2572]: E0307 00:54:09.299381 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.299562 kubelet[2572]: E0307 00:54:09.299552 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.299604 kubelet[2572]: W0307 00:54:09.299562 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.299604 kubelet[2572]: E0307 00:54:09.299570 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.299883 kubelet[2572]: E0307 00:54:09.299864 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.299883 kubelet[2572]: W0307 00:54:09.299879 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.299992 kubelet[2572]: E0307 00:54:09.299891 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.300101 kubelet[2572]: E0307 00:54:09.300090 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.300134 kubelet[2572]: W0307 00:54:09.300100 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.300134 kubelet[2572]: E0307 00:54:09.300112 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:09.300434 kubelet[2572]: E0307 00:54:09.300304 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.300434 kubelet[2572]: W0307 00:54:09.300313 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.300434 kubelet[2572]: E0307 00:54:09.300323 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.317360 kubelet[2572]: E0307 00:54:09.317098 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.317360 kubelet[2572]: W0307 00:54:09.317131 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.317360 kubelet[2572]: E0307 00:54:09.317169 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.317686 kubelet[2572]: E0307 00:54:09.317664 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.317814 kubelet[2572]: W0307 00:54:09.317795 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.317906 kubelet[2572]: E0307 00:54:09.317888 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.318415 kubelet[2572]: E0307 00:54:09.318366 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.318415 kubelet[2572]: W0307 00:54:09.318397 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.318415 kubelet[2572]: E0307 00:54:09.318418 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.318719 kubelet[2572]: E0307 00:54:09.318684 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.318719 kubelet[2572]: W0307 00:54:09.318717 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.318816 kubelet[2572]: E0307 00:54:09.318736 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:09.319037 kubelet[2572]: E0307 00:54:09.319002 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.319037 kubelet[2572]: W0307 00:54:09.319016 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.319037 kubelet[2572]: E0307 00:54:09.319034 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.319411 kubelet[2572]: E0307 00:54:09.319394 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.319411 kubelet[2572]: W0307 00:54:09.319410 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.319492 kubelet[2572]: E0307 00:54:09.319426 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.321283 kubelet[2572]: E0307 00:54:09.320536 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.321283 kubelet[2572]: W0307 00:54:09.320566 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.321283 kubelet[2572]: E0307 00:54:09.320584 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.321283 kubelet[2572]: E0307 00:54:09.320931 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.321283 kubelet[2572]: W0307 00:54:09.320947 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.321283 kubelet[2572]: E0307 00:54:09.320960 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.321727 kubelet[2572]: E0307 00:54:09.321683 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.321727 kubelet[2572]: W0307 00:54:09.321721 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.321806 kubelet[2572]: E0307 00:54:09.321740 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:09.322104 kubelet[2572]: E0307 00:54:09.322088 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.322104 kubelet[2572]: W0307 00:54:09.322104 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.322208 kubelet[2572]: E0307 00:54:09.322117 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.322362 kubelet[2572]: E0307 00:54:09.322333 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.322362 kubelet[2572]: W0307 00:54:09.322344 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.322362 kubelet[2572]: E0307 00:54:09.322355 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.323368 kubelet[2572]: E0307 00:54:09.322544 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.323368 kubelet[2572]: W0307 00:54:09.322568 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.323368 kubelet[2572]: E0307 00:54:09.322579 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.323368 kubelet[2572]: E0307 00:54:09.322800 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.323368 kubelet[2572]: W0307 00:54:09.322809 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.323368 kubelet[2572]: E0307 00:54:09.322820 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.323368 kubelet[2572]: E0307 00:54:09.323191 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.323368 kubelet[2572]: W0307 00:54:09.323202 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.323368 kubelet[2572]: E0307 00:54:09.323214 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:54:09.323972 kubelet[2572]: E0307 00:54:09.323938 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.323972 kubelet[2572]: W0307 00:54:09.323961 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.323972 kubelet[2572]: E0307 00:54:09.323975 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.324241 kubelet[2572]: E0307 00:54:09.324227 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.324241 kubelet[2572]: W0307 00:54:09.324239 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.324297 kubelet[2572]: E0307 00:54:09.324249 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.324460 kubelet[2572]: E0307 00:54:09.324451 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.324460 kubelet[2572]: W0307 00:54:09.324461 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.324527 kubelet[2572]: E0307 00:54:09.324469 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.324899 kubelet[2572]: E0307 00:54:09.324881 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:54:09.324899 kubelet[2572]: W0307 00:54:09.324897 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:54:09.324984 kubelet[2572]: E0307 00:54:09.324908 2572 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:54:09.630000 systemd[1]: run-containerd-runc-k8s.io-17e15839eb42cc4e860db9aa1cec97d0fface5f6aab6f6c0396d79108cd40c88-runc.iv8wsM.mount: Deactivated successfully. 
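The long runs of driver-call.go and plugins.go errors above all have one cause: kubelet's FlexVolume prober finds the nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ before the flexvol-driver container created just below has installed the uds binary, so invoking the driver with "init" produces empty output that cannot be unmarshalled as JSON. For reference, a FlexVolume driver only has to answer "init" with a small JSON status object; the following is a minimal illustrative sketch in Python, not Calico's actual uds driver (real drivers are typically native binaries or shell scripts and also implement mount/unmount):

    #!/usr/bin/env python3
    # Minimal illustration of the FlexVolume "init" handshake kubelet's prober
    # expects. Not Calico's uds driver; for reference only.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Printing nothing here is what produces the "unexpected end of
            # JSON input" errors above; a driver must always emit a JSON
            # status object on stdout.
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Operations this sketch does not implement (mount, unmount, ...);
        # exact exit-code conventions for those are outside this sketch.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Once the pod2daemon-flexvol image pulled below has dropped the real binary into that directory, these probe errors no longer appear in the log.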
Mar 7 00:54:09.976651 containerd[1473]: time="2026-03-07T00:54:09.976508100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:09.977762 containerd[1473]: time="2026-03-07T00:54:09.977703979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 7 00:54:09.979077 containerd[1473]: time="2026-03-07T00:54:09.978978658Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:09.981809 containerd[1473]: time="2026-03-07T00:54:09.981756217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:09.983659 containerd[1473]: time="2026-03-07T00:54:09.982906256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.363260761s" Mar 7 00:54:09.983659 containerd[1473]: time="2026-03-07T00:54:09.982947376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 7 00:54:09.988961 containerd[1473]: time="2026-03-07T00:54:09.988909853Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 00:54:10.008860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4074676102.mount: Deactivated successfully. Mar 7 00:54:10.019245 containerd[1473]: time="2026-03-07T00:54:10.019181919Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff\"" Mar 7 00:54:10.022114 containerd[1473]: time="2026-03-07T00:54:10.020207758Z" level=info msg="StartContainer for \"53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff\"" Mar 7 00:54:10.061380 systemd[1]: Started cri-containerd-53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff.scope - libcontainer container 53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff. Mar 7 00:54:10.095790 containerd[1473]: time="2026-03-07T00:54:10.095739521Z" level=info msg="StartContainer for \"53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff\" returns successfully" Mar 7 00:54:10.116617 systemd[1]: cri-containerd-53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff.scope: Deactivated successfully. 
Mar 7 00:54:10.140535 kubelet[2572]: E0307 00:54:10.140461 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:10.261591 containerd[1473]: time="2026-03-07T00:54:10.261266040Z" level=info msg="shim disconnected" id=53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff namespace=k8s.io Mar 7 00:54:10.261591 containerd[1473]: time="2026-03-07T00:54:10.261324000Z" level=warning msg="cleaning up after shim disconnected" id=53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff namespace=k8s.io Mar 7 00:54:10.261591 containerd[1473]: time="2026-03-07T00:54:10.261332680Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:10.274970 kubelet[2572]: I0307 00:54:10.274749 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:10.626969 systemd[1]: run-containerd-runc-k8s.io-53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff-runc.EgaABC.mount: Deactivated successfully. Mar 7 00:54:10.627118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53d5245b5c83d8e0b7dbb47aa807aef95cd842637266374cf41137336ef2fbff-rootfs.mount: Deactivated successfully. Mar 7 00:54:11.281023 containerd[1473]: time="2026-03-07T00:54:11.280893025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 00:54:12.140518 kubelet[2572]: E0307 00:54:12.140122 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:14.140753 kubelet[2572]: E0307 00:54:14.140694 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:16.141019 kubelet[2572]: E0307 00:54:16.139649 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:17.303685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857993036.mount: Deactivated successfully. 
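The recurring pod_workers "network is not ready ... cni plugin not initialized" errors above persist while calico-node's CNI configuration is still being installed; kubelet holds the node's Ready condition at false until the CNI plugin initializes, and the later "Fast updating node status as it just became ready" entry marks that transition. As an illustrative way to watch the same condition from outside the node, here is a sketch using the official kubernetes Python client; the node name is the one that appears later in this log, and none of this is part of the components being logged:

    # Illustrative check of a node's Ready condition with the official
    # kubernetes Python client.
    from kubernetes import client, config

    def node_ready(name: str) -> None:
        config.load_kube_config()  # or config.load_incluster_config()
        node = client.CoreV1Api().read_node(name)
        for cond in node.status.conditions:
            if cond.type == "Ready":
                # While the CNI config is missing, status is "False" with
                # reason "KubeletNotReady" and a message like the one above.
                print(cond.status, cond.reason, cond.message)

    if __name__ == "__main__":
        node_ready("ci-4081-3-6-n-2a659a64a8")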
Mar 7 00:54:17.337104 containerd[1473]: time="2026-03-07T00:54:17.335289293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:17.338321 containerd[1473]: time="2026-03-07T00:54:17.338265852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 7 00:54:17.341444 containerd[1473]: time="2026-03-07T00:54:17.341390250Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:17.345165 containerd[1473]: time="2026-03-07T00:54:17.344591649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:17.346359 containerd[1473]: time="2026-03-07T00:54:17.346309168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.065368103s" Mar 7 00:54:17.346359 containerd[1473]: time="2026-03-07T00:54:17.346349088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 7 00:54:17.353267 containerd[1473]: time="2026-03-07T00:54:17.353074805Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 00:54:17.370202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910017059.mount: Deactivated successfully. Mar 7 00:54:17.379508 containerd[1473]: time="2026-03-07T00:54:17.379456673Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc\"" Mar 7 00:54:17.381362 containerd[1473]: time="2026-03-07T00:54:17.381315032Z" level=info msg="StartContainer for \"57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc\"" Mar 7 00:54:17.426341 systemd[1]: Started cri-containerd-57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc.scope - libcontainer container 57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc. Mar 7 00:54:17.462384 containerd[1473]: time="2026-03-07T00:54:17.462340836Z" level=info msg="StartContainer for \"57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc\" returns successfully" Mar 7 00:54:17.567182 systemd[1]: cri-containerd-57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc.scope: Deactivated successfully. 
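The sequence above, where ebpf-bootstrap (like flexvol-driver before it) starts, returns successfully, and has its cri-containerd scope deactivated almost immediately, is how calico-node's short-lived setup containers look from the node's side: each runs to completion before the next image is pulled. A hedged sketch for inspecting those results with the kubernetes Python client, using the pod name and namespace from the RunPodSandbox entries above:

    # Illustrative: list init-container results for the calico-node pod named
    # in the RunPodSandbox entries above (calico-node-27n9r, calico-system).
    from kubernetes import client, config

    def init_container_results(namespace: str, pod_name: str) -> None:
        config.load_kube_config()
        pod = client.CoreV1Api().read_namespaced_pod(pod_name, namespace)
        for status in pod.status.init_container_statuses or []:
            term = status.state.terminated
            if term is not None:
                print(f"{status.name}: exit {term.exit_code} ({term.reason})")
            else:
                print(f"{status.name}: still running or waiting")

    if __name__ == "__main__":
        init_container_results("calico-system", "calico-node-27n9r")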
Mar 7 00:54:17.758168 containerd[1473]: time="2026-03-07T00:54:17.758100904Z" level=info msg="shim disconnected" id=57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc namespace=k8s.io Mar 7 00:54:17.758485 containerd[1473]: time="2026-03-07T00:54:17.758458024Z" level=warning msg="cleaning up after shim disconnected" id=57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc namespace=k8s.io Mar 7 00:54:17.758589 containerd[1473]: time="2026-03-07T00:54:17.758567823Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:17.773897 containerd[1473]: time="2026-03-07T00:54:17.773846617Z" level=warning msg="cleanup warnings time=\"2026-03-07T00:54:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 00:54:18.141270 kubelet[2572]: E0307 00:54:18.141217 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:18.303996 containerd[1473]: time="2026-03-07T00:54:18.303743461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 00:54:18.306864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57a0a7ed585530c1e3219cc32f7ef31338ffac04ab7783b383a97c22667181dc-rootfs.mount: Deactivated successfully. Mar 7 00:54:20.139980 kubelet[2572]: E0307 00:54:20.139247 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:21.396285 kubelet[2572]: I0307 00:54:21.396230 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:21.797952 containerd[1473]: time="2026-03-07T00:54:21.797770374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:21.799949 containerd[1473]: time="2026-03-07T00:54:21.799870053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 7 00:54:21.801029 containerd[1473]: time="2026-03-07T00:54:21.800495933Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:21.803877 containerd[1473]: time="2026-03-07T00:54:21.803836612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:21.804826 containerd[1473]: time="2026-03-07T00:54:21.804786931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.50099435s" Mar 7 00:54:21.804949 containerd[1473]: time="2026-03-07T00:54:21.804927731Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 7 00:54:21.811239 containerd[1473]: time="2026-03-07T00:54:21.811200369Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 00:54:21.831271 containerd[1473]: time="2026-03-07T00:54:21.831219000Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469\"" Mar 7 00:54:21.832301 containerd[1473]: time="2026-03-07T00:54:21.832233480Z" level=info msg="StartContainer for \"280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469\"" Mar 7 00:54:21.872444 systemd[1]: Started cri-containerd-280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469.scope - libcontainer container 280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469. Mar 7 00:54:21.909461 containerd[1473]: time="2026-03-07T00:54:21.909314166Z" level=info msg="StartContainer for \"280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469\" returns successfully" Mar 7 00:54:22.141239 kubelet[2572]: E0307 00:54:22.141182 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p8nh" podUID="127a57c4-5bc8-40d4-8acc-8570b2710eaf" Mar 7 00:54:22.606858 systemd[1]: cri-containerd-280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469.scope: Deactivated successfully. Mar 7 00:54:22.638285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469-rootfs.mount: Deactivated successfully. Mar 7 00:54:22.640905 containerd[1473]: time="2026-03-07T00:54:22.640846906Z" level=info msg="shim disconnected" id=280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469 namespace=k8s.io Mar 7 00:54:22.641326 containerd[1473]: time="2026-03-07T00:54:22.641150067Z" level=warning msg="cleaning up after shim disconnected" id=280633ad1bea9182ff01e65ee12a359a047d95ee6c81dbac24f13367ce24e469 namespace=k8s.io Mar 7 00:54:22.641326 containerd[1473]: time="2026-03-07T00:54:22.641167467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:54:22.690278 kubelet[2572]: I0307 00:54:22.690237 2572 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 00:54:22.780247 systemd[1]: Created slice kubepods-burstable-pode871832b_09ba_4e7c_a062_f6bcd18f87c8.slice - libcontainer container kubepods-burstable-pode871832b_09ba_4e7c_a062_f6bcd18f87c8.slice. Mar 7 00:54:22.790096 systemd[1]: Created slice kubepods-burstable-podea70a10d_7ce3_4275_9b2c_469af8e4ffc8.slice - libcontainer container kubepods-burstable-podea70a10d_7ce3_4275_9b2c_469af8e4ffc8.slice. Mar 7 00:54:22.803861 systemd[1]: Created slice kubepods-besteffort-pod2aea807c_a68a_4974_811d_20e6d9c78bc0.slice - libcontainer container kubepods-besteffort-pod2aea807c_a68a_4974_811d_20e6d9c78bc0.slice. 
Mar 7 00:54:22.817973 kubelet[2572]: E0307 00:54:22.816343 2572 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081-3-6-n-2a659a64a8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-6-n-2a659a64a8' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"calico-apiserver-certs\"" type="*v1.Secret" Mar 7 00:54:22.820071 kubelet[2572]: I0307 00:54:22.818702 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-nginx-config\") pod \"whisker-77997bc986-2dg5x\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " pod="calico-system/whisker-77997bc986-2dg5x" Mar 7 00:54:22.820375 kubelet[2572]: I0307 00:54:22.820351 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-ca-bundle\") pod \"whisker-77997bc986-2dg5x\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " pod="calico-system/whisker-77997bc986-2dg5x" Mar 7 00:54:22.820501 kubelet[2572]: I0307 00:54:22.820487 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vk94\" (UniqueName: \"kubernetes.io/projected/2aea807c-a68a-4974-811d-20e6d9c78bc0-kube-api-access-8vk94\") pod \"whisker-77997bc986-2dg5x\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " pod="calico-system/whisker-77997bc986-2dg5x" Mar 7 00:54:22.820657 kubelet[2572]: I0307 00:54:22.820602 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbkcg\" (UniqueName: \"kubernetes.io/projected/e871832b-09ba-4e7c-a062-f6bcd18f87c8-kube-api-access-mbkcg\") pod \"coredns-674b8bbfcf-mjqpf\" (UID: \"e871832b-09ba-4e7c-a062-f6bcd18f87c8\") " pod="kube-system/coredns-674b8bbfcf-mjqpf" Mar 7 00:54:22.820751 kubelet[2572]: I0307 00:54:22.820730 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-backend-key-pair\") pod \"whisker-77997bc986-2dg5x\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " pod="calico-system/whisker-77997bc986-2dg5x" Mar 7 00:54:22.820840 kubelet[2572]: I0307 00:54:22.820826 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhfsp\" (UniqueName: \"kubernetes.io/projected/ea70a10d-7ce3-4275-9b2c-469af8e4ffc8-kube-api-access-rhfsp\") pod \"coredns-674b8bbfcf-q7gff\" (UID: \"ea70a10d-7ce3-4275-9b2c-469af8e4ffc8\") " pod="kube-system/coredns-674b8bbfcf-q7gff" Mar 7 00:54:22.821040 kubelet[2572]: I0307 00:54:22.820979 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e871832b-09ba-4e7c-a062-f6bcd18f87c8-config-volume\") pod \"coredns-674b8bbfcf-mjqpf\" (UID: \"e871832b-09ba-4e7c-a062-f6bcd18f87c8\") " pod="kube-system/coredns-674b8bbfcf-mjqpf" Mar 7 00:54:22.821171 kubelet[2572]: I0307 00:54:22.821156 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ea70a10d-7ce3-4275-9b2c-469af8e4ffc8-config-volume\") pod \"coredns-674b8bbfcf-q7gff\" (UID: \"ea70a10d-7ce3-4275-9b2c-469af8e4ffc8\") " pod="kube-system/coredns-674b8bbfcf-q7gff" Mar 7 00:54:22.831539 systemd[1]: Created slice kubepods-besteffort-pod3060f919_958a_4697_b760_500d72a51d7d.slice - libcontainer container kubepods-besteffort-pod3060f919_958a_4697_b760_500d72a51d7d.slice. Mar 7 00:54:22.851549 systemd[1]: Created slice kubepods-besteffort-pod12a36780_8f9b_49b6_ae7a_05cc5d985d69.slice - libcontainer container kubepods-besteffort-pod12a36780_8f9b_49b6_ae7a_05cc5d985d69.slice. Mar 7 00:54:22.861737 systemd[1]: Created slice kubepods-besteffort-poddb1c0333_fb89_4a9a_b9e2_a32a5d4a41cf.slice - libcontainer container kubepods-besteffort-poddb1c0333_fb89_4a9a_b9e2_a32a5d4a41cf.slice. Mar 7 00:54:22.872248 systemd[1]: Created slice kubepods-besteffort-pod09e69c3c_3f2b_493c_a453_9280256b7c38.slice - libcontainer container kubepods-besteffort-pod09e69c3c_3f2b_493c_a453_9280256b7c38.slice. Mar 7 00:54:22.922151 kubelet[2572]: I0307 00:54:22.921888 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12a36780-8f9b-49b6-ae7a-05cc5d985d69-config\") pod \"goldmane-5b85766d88-6gzpk\" (UID: \"12a36780-8f9b-49b6-ae7a-05cc5d985d69\") " pod="calico-system/goldmane-5b85766d88-6gzpk" Mar 7 00:54:22.922151 kubelet[2572]: I0307 00:54:22.921959 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf-calico-apiserver-certs\") pod \"calico-apiserver-54959b7f66-fgbzm\" (UID: \"db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf\") " pod="calico-system/calico-apiserver-54959b7f66-fgbzm" Mar 7 00:54:22.922151 kubelet[2572]: I0307 00:54:22.921982 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12a36780-8f9b-49b6-ae7a-05cc5d985d69-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-6gzpk\" (UID: \"12a36780-8f9b-49b6-ae7a-05cc5d985d69\") " pod="calico-system/goldmane-5b85766d88-6gzpk" Mar 7 00:54:22.922151 kubelet[2572]: I0307 00:54:22.922087 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09e69c3c-3f2b-493c-a453-9280256b7c38-tigera-ca-bundle\") pod \"calico-kube-controllers-559d85bfb5-ppgz2\" (UID: \"09e69c3c-3f2b-493c-a453-9280256b7c38\") " pod="calico-system/calico-kube-controllers-559d85bfb5-ppgz2" Mar 7 00:54:22.922151 kubelet[2572]: I0307 00:54:22.922107 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tcb4\" (UniqueName: \"kubernetes.io/projected/db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf-kube-api-access-7tcb4\") pod \"calico-apiserver-54959b7f66-fgbzm\" (UID: \"db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf\") " pod="calico-system/calico-apiserver-54959b7f66-fgbzm" Mar 7 00:54:22.922872 kubelet[2572]: I0307 00:54:22.922151 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3060f919-958a-4697-b760-500d72a51d7d-calico-apiserver-certs\") pod \"calico-apiserver-54959b7f66-h654f\" (UID: \"3060f919-958a-4697-b760-500d72a51d7d\") " 
pod="calico-system/calico-apiserver-54959b7f66-h654f" Mar 7 00:54:22.922872 kubelet[2572]: I0307 00:54:22.922169 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwlpb\" (UniqueName: \"kubernetes.io/projected/3060f919-958a-4697-b760-500d72a51d7d-kube-api-access-fwlpb\") pod \"calico-apiserver-54959b7f66-h654f\" (UID: \"3060f919-958a-4697-b760-500d72a51d7d\") " pod="calico-system/calico-apiserver-54959b7f66-h654f" Mar 7 00:54:22.922872 kubelet[2572]: I0307 00:54:22.922187 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgzqg\" (UniqueName: \"kubernetes.io/projected/12a36780-8f9b-49b6-ae7a-05cc5d985d69-kube-api-access-sgzqg\") pod \"goldmane-5b85766d88-6gzpk\" (UID: \"12a36780-8f9b-49b6-ae7a-05cc5d985d69\") " pod="calico-system/goldmane-5b85766d88-6gzpk" Mar 7 00:54:22.922872 kubelet[2572]: I0307 00:54:22.922205 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4wk2\" (UniqueName: \"kubernetes.io/projected/09e69c3c-3f2b-493c-a453-9280256b7c38-kube-api-access-p4wk2\") pod \"calico-kube-controllers-559d85bfb5-ppgz2\" (UID: \"09e69c3c-3f2b-493c-a453-9280256b7c38\") " pod="calico-system/calico-kube-controllers-559d85bfb5-ppgz2" Mar 7 00:54:22.922872 kubelet[2572]: I0307 00:54:22.922224 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/12a36780-8f9b-49b6-ae7a-05cc5d985d69-goldmane-key-pair\") pod \"goldmane-5b85766d88-6gzpk\" (UID: \"12a36780-8f9b-49b6-ae7a-05cc5d985d69\") " pod="calico-system/goldmane-5b85766d88-6gzpk" Mar 7 00:54:23.086355 containerd[1473]: time="2026-03-07T00:54:23.086278702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjqpf,Uid:e871832b-09ba-4e7c-a062-f6bcd18f87c8,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:23.097822 containerd[1473]: time="2026-03-07T00:54:23.097761068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7gff,Uid:ea70a10d-7ce3-4275-9b2c-469af8e4ffc8,Namespace:kube-system,Attempt:0,}" Mar 7 00:54:23.120026 containerd[1473]: time="2026-03-07T00:54:23.119671188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77997bc986-2dg5x,Uid:2aea807c-a68a-4974-811d-20e6d9c78bc0,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:23.160017 containerd[1473]: time="2026-03-07T00:54:23.159909972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6gzpk,Uid:12a36780-8f9b-49b6-ae7a-05cc5d985d69,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:23.176956 containerd[1473]: time="2026-03-07T00:54:23.176692959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559d85bfb5-ppgz2,Uid:09e69c3c-3f2b-493c-a453-9280256b7c38,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:23.266259 containerd[1473]: time="2026-03-07T00:54:23.266196532Z" level=error msg="Failed to destroy network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.267390 containerd[1473]: time="2026-03-07T00:54:23.267330308Z" level=error msg="encountered an error cleaning up failed sandbox 
\"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.268820 containerd[1473]: time="2026-03-07T00:54:23.267411152Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjqpf,Uid:e871832b-09ba-4e7c-a062-f6bcd18f87c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.268947 kubelet[2572]: E0307 00:54:23.267644 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.268947 kubelet[2572]: E0307 00:54:23.267719 2572 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mjqpf" Mar 7 00:54:23.268947 kubelet[2572]: E0307 00:54:23.267742 2572 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mjqpf" Mar 7 00:54:23.269035 kubelet[2572]: E0307 00:54:23.267794 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mjqpf_kube-system(e871832b-09ba-4e7c-a062-f6bcd18f87c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mjqpf_kube-system(e871832b-09ba-4e7c-a062-f6bcd18f87c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mjqpf" podUID="e871832b-09ba-4e7c-a062-f6bcd18f87c8" Mar 7 00:54:23.304830 containerd[1473]: time="2026-03-07T00:54:23.304781994Z" level=error msg="Failed to destroy network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.305980 containerd[1473]: time="2026-03-07T00:54:23.305783323Z" level=error 
msg="encountered an error cleaning up failed sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.305980 containerd[1473]: time="2026-03-07T00:54:23.305864287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77997bc986-2dg5x,Uid:2aea807c-a68a-4974-811d-20e6d9c78bc0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.306672 kubelet[2572]: E0307 00:54:23.306361 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.306672 kubelet[2572]: E0307 00:54:23.306452 2572 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77997bc986-2dg5x" Mar 7 00:54:23.306672 kubelet[2572]: E0307 00:54:23.306489 2572 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77997bc986-2dg5x" Mar 7 00:54:23.308393 kubelet[2572]: E0307 00:54:23.306548 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77997bc986-2dg5x_calico-system(2aea807c-a68a-4974-811d-20e6d9c78bc0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77997bc986-2dg5x_calico-system(2aea807c-a68a-4974-811d-20e6d9c78bc0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77997bc986-2dg5x" podUID="2aea807c-a68a-4974-811d-20e6d9c78bc0" Mar 7 00:54:23.326616 containerd[1473]: time="2026-03-07T00:54:23.326440022Z" level=error msg="Failed to destroy network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.328515 containerd[1473]: 
time="2026-03-07T00:54:23.328336515Z" level=error msg="encountered an error cleaning up failed sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.328515 containerd[1473]: time="2026-03-07T00:54:23.328405239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7gff,Uid:ea70a10d-7ce3-4275-9b2c-469af8e4ffc8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.330653 kubelet[2572]: E0307 00:54:23.329360 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.330653 kubelet[2572]: E0307 00:54:23.329420 2572 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7gff" Mar 7 00:54:23.330653 kubelet[2572]: E0307 00:54:23.329442 2572 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q7gff" Mar 7 00:54:23.330824 kubelet[2572]: E0307 00:54:23.329507 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q7gff_kube-system(ea70a10d-7ce3-4275-9b2c-469af8e4ffc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q7gff_kube-system(ea70a10d-7ce3-4275-9b2c-469af8e4ffc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q7gff" podUID="ea70a10d-7ce3-4275-9b2c-469af8e4ffc8" Mar 7 00:54:23.330824 kubelet[2572]: I0307 00:54:23.330615 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:23.333066 containerd[1473]: time="2026-03-07T00:54:23.333016066Z" level=info msg="StopPodSandbox for 
\"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\"" Mar 7 00:54:23.333879 kubelet[2572]: I0307 00:54:23.333829 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:23.334020 containerd[1473]: time="2026-03-07T00:54:23.333451487Z" level=info msg="Ensure that sandbox 064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d in task-service has been cleanup successfully" Mar 7 00:54:23.334436 containerd[1473]: time="2026-03-07T00:54:23.334402214Z" level=info msg="StopPodSandbox for \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\"" Mar 7 00:54:23.334653 containerd[1473]: time="2026-03-07T00:54:23.334616465Z" level=info msg="Ensure that sandbox 51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0 in task-service has been cleanup successfully" Mar 7 00:54:23.351521 containerd[1473]: time="2026-03-07T00:54:23.351428854Z" level=error msg="Failed to destroy network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.354126 containerd[1473]: time="2026-03-07T00:54:23.352790481Z" level=error msg="encountered an error cleaning up failed sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.354126 containerd[1473]: time="2026-03-07T00:54:23.352857844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559d85bfb5-ppgz2,Uid:09e69c3c-3f2b-493c-a453-9280256b7c38,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.359120 kubelet[2572]: E0307 00:54:23.358549 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.359120 kubelet[2572]: E0307 00:54:23.358610 2572 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-559d85bfb5-ppgz2" Mar 7 00:54:23.359120 kubelet[2572]: E0307 00:54:23.358632 2572 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-559d85bfb5-ppgz2" Mar 7 00:54:23.359327 kubelet[2572]: E0307 00:54:23.358686 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-559d85bfb5-ppgz2_calico-system(09e69c3c-3f2b-493c-a453-9280256b7c38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-559d85bfb5-ppgz2_calico-system(09e69c3c-3f2b-493c-a453-9280256b7c38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-559d85bfb5-ppgz2" podUID="09e69c3c-3f2b-493c-a453-9280256b7c38" Mar 7 00:54:23.382099 containerd[1473]: time="2026-03-07T00:54:23.381531058Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 00:54:23.392922 containerd[1473]: time="2026-03-07T00:54:23.391317540Z" level=error msg="Failed to destroy network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.402235 containerd[1473]: time="2026-03-07T00:54:23.402143754Z" level=error msg="encountered an error cleaning up failed sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.402358 containerd[1473]: time="2026-03-07T00:54:23.402255680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6gzpk,Uid:12a36780-8f9b-49b6-ae7a-05cc5d985d69,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.402785 kubelet[2572]: E0307 00:54:23.402621 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.402785 kubelet[2572]: E0307 00:54:23.402681 2572 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-6gzpk" Mar 7 00:54:23.402785 kubelet[2572]: E0307 00:54:23.402703 2572 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-6gzpk" Mar 7 00:54:23.404457 kubelet[2572]: E0307 00:54:23.402962 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-6gzpk_calico-system(12a36780-8f9b-49b6-ae7a-05cc5d985d69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-6gzpk_calico-system(12a36780-8f9b-49b6-ae7a-05cc5d985d69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-6gzpk" podUID="12a36780-8f9b-49b6-ae7a-05cc5d985d69" Mar 7 00:54:23.420517 containerd[1473]: time="2026-03-07T00:54:23.420428215Z" level=error msg="StopPodSandbox for \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\" failed" error="failed to destroy network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.421042 containerd[1473]: time="2026-03-07T00:54:23.420805114Z" level=error msg="StopPodSandbox for \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\" failed" error="failed to destroy network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:54:23.421395 kubelet[2572]: E0307 00:54:23.420999 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:23.421395 kubelet[2572]: E0307 00:54:23.420999 2572 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:23.421395 kubelet[2572]: E0307 00:54:23.421198 2572 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d"} Mar 7 00:54:23.421395 kubelet[2572]: E0307 00:54:23.421292 2572 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2aea807c-a68a-4974-811d-20e6d9c78bc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:23.421395 kubelet[2572]: E0307 00:54:23.421237 2572 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0"} Mar 7 00:54:23.421590 kubelet[2572]: E0307 00:54:23.421343 2572 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e871832b-09ba-4e7c-a062-f6bcd18f87c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 7 00:54:23.421590 kubelet[2572]: E0307 00:54:23.421363 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e871832b-09ba-4e7c-a062-f6bcd18f87c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mjqpf" podUID="e871832b-09ba-4e7c-a062-f6bcd18f87c8" Mar 7 00:54:23.421590 kubelet[2572]: E0307 00:54:23.421409 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2aea807c-a68a-4974-811d-20e6d9c78bc0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77997bc986-2dg5x" podUID="2aea807c-a68a-4974-811d-20e6d9c78bc0" Mar 7 00:54:23.434664 containerd[1473]: time="2026-03-07T00:54:23.434435626Z" level=info msg="CreateContainer within sandbox \"7e748d434672db57c86bb30067fceeded715eee42bda7b1fdb17a320c1e2d0f6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9349d5f4ee9eac7622a9bb74ce94f2b8908e273fed82ef71220fcf485d17f5a2\"" Mar 7 00:54:23.436869 containerd[1473]: time="2026-03-07T00:54:23.435340071Z" level=info msg="StartContainer for \"9349d5f4ee9eac7622a9bb74ce94f2b8908e273fed82ef71220fcf485d17f5a2\"" Mar 7 00:54:23.473371 systemd[1]: Started cri-containerd-9349d5f4ee9eac7622a9bb74ce94f2b8908e273fed82ef71220fcf485d17f5a2.scope - libcontainer container 9349d5f4ee9eac7622a9bb74ce94f2b8908e273fed82ef71220fcf485d17f5a2. 
Mar 7 00:54:23.512471 containerd[1473]: time="2026-03-07T00:54:23.512405590Z" level=info msg="StartContainer for \"9349d5f4ee9eac7622a9bb74ce94f2b8908e273fed82ef71220fcf485d17f5a2\" returns successfully" Mar 7 00:54:24.027340 kubelet[2572]: E0307 00:54:24.025421 2572 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Mar 7 00:54:24.027340 kubelet[2572]: E0307 00:54:24.025563 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf-calico-apiserver-certs podName:db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf nodeName:}" failed. No retries permitted until 2026-03-07 00:54:24.525536734 +0000 UTC m=+40.532076415 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf-calico-apiserver-certs") pod "calico-apiserver-54959b7f66-fgbzm" (UID: "db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf") : failed to sync secret cache: timed out waiting for the condition Mar 7 00:54:24.030215 kubelet[2572]: E0307 00:54:24.030087 2572 secret.go:189] Couldn't get secret calico-system/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Mar 7 00:54:24.030215 kubelet[2572]: E0307 00:54:24.030177 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3060f919-958a-4697-b760-500d72a51d7d-calico-apiserver-certs podName:3060f919-958a-4697-b760-500d72a51d7d nodeName:}" failed. No retries permitted until 2026-03-07 00:54:24.530158035 +0000 UTC m=+40.536697676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3060f919-958a-4697-b760-500d72a51d7d-calico-apiserver-certs") pod "calico-apiserver-54959b7f66-h654f" (UID: "3060f919-958a-4697-b760-500d72a51d7d") : failed to sync secret cache: timed out waiting for the condition Mar 7 00:54:24.152920 systemd[1]: Created slice kubepods-besteffort-pod127a57c4_5bc8_40d4_8acc_8570b2710eaf.slice - libcontainer container kubepods-besteffort-pod127a57c4_5bc8_40d4_8acc_8570b2710eaf.slice. 
Mar 7 00:54:24.157000 containerd[1473]: time="2026-03-07T00:54:24.156945711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8p8nh,Uid:127a57c4-5bc8-40d4-8acc-8570b2710eaf,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:24.361849 systemd-networkd[1365]: calice2856210e8: Link UP Mar 7 00:54:24.362143 systemd-networkd[1365]: calice2856210e8: Gained carrier Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.196 [ERROR][3643] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.227 [INFO][3643] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0 csi-node-driver- calico-system 127a57c4-5bc8-40d4-8acc-8570b2710eaf 705 0 2026-03-07 00:54:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 csi-node-driver-8p8nh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calice2856210e8 [] [] }} ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.228 [INFO][3643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.283 [INFO][3655] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" HandleID="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Workload="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.299 [INFO][3655] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" HandleID="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Workload="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"csi-node-driver-8p8nh", "timestamp":"2026-03-07 00:54:24.283968878 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e9b80)} Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.300 [INFO][3655] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.300 [INFO][3655] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.300 [INFO][3655] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.305 [INFO][3655] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.314 [INFO][3655] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.320 [INFO][3655] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.323 [INFO][3655] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.326 [INFO][3655] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.326 [INFO][3655] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.328 [INFO][3655] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120 Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.335 [INFO][3655] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.343 [INFO][3655] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.193/26] block=192.168.96.192/26 handle="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.343 [INFO][3655] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.193/26] handle="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.343 [INFO][3655] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:24.391336 containerd[1473]: 2026-03-07 00:54:24.343 [INFO][3655] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.193/26] IPv6=[] ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" HandleID="k8s-pod-network.b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Workload="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" Mar 7 00:54:24.392527 containerd[1473]: 2026-03-07 00:54:24.348 [INFO][3643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"127a57c4-5bc8-40d4-8acc-8570b2710eaf", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"csi-node-driver-8p8nh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice2856210e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:24.392527 containerd[1473]: 2026-03-07 00:54:24.349 [INFO][3643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.193/32] ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" Mar 7 00:54:24.392527 containerd[1473]: 2026-03-07 00:54:24.349 [INFO][3643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice2856210e8 ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" Mar 7 00:54:24.392527 containerd[1473]: 2026-03-07 00:54:24.366 [INFO][3643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" Mar 7 00:54:24.392527 containerd[1473]: 2026-03-07 00:54:24.366 [INFO][3643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"127a57c4-5bc8-40d4-8acc-8570b2710eaf", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120", Pod:"csi-node-driver-8p8nh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.96.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calice2856210e8", MAC:"b2:5c:30:bb:c6:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:24.392527 containerd[1473]: 2026-03-07 00:54:24.381 [INFO][3643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120" Namespace="calico-system" Pod="csi-node-driver-8p8nh" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-csi--node--driver--8p8nh-eth0" Mar 7 00:54:24.410809 kubelet[2572]: I0307 00:54:24.410583 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:24.413957 containerd[1473]: time="2026-03-07T00:54:24.412010614Z" level=info msg="StopPodSandbox for \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\"" Mar 7 00:54:24.413957 containerd[1473]: time="2026-03-07T00:54:24.412241665Z" level=info msg="Ensure that sandbox 440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1 in task-service has been cleanup successfully" Mar 7 00:54:24.415785 kubelet[2572]: I0307 00:54:24.415475 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:24.417130 containerd[1473]: time="2026-03-07T00:54:24.417097258Z" level=info msg="StopPodSandbox for \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\"" Mar 7 00:54:24.417748 containerd[1473]: time="2026-03-07T00:54:24.417675485Z" level=info msg="Ensure that sandbox 2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90 in task-service has been cleanup successfully" Mar 7 00:54:24.421519 kubelet[2572]: I0307 00:54:24.421325 2572 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:24.424608 containerd[1473]: time="2026-03-07T00:54:24.423658332Z" level=info msg="StopPodSandbox for \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\"" Mar 7 00:54:24.427330 containerd[1473]: time="2026-03-07T00:54:24.426227935Z" level=info msg="Ensure that sandbox 9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5 in task-service has been cleanup successfully" Mar 7 00:54:24.449430 containerd[1473]: time="2026-03-07T00:54:24.449303001Z" level=info msg="StopPodSandbox for \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\"" Mar 7 00:54:24.468066 kubelet[2572]: I0307 00:54:24.467973 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-27n9r" podStartSLOduration=3.571354131 podStartE2EDuration="18.467952975s" podCreationTimestamp="2026-03-07 00:54:06 +0000 UTC" firstStartedPulling="2026-03-07 00:54:06.909315167 +0000 UTC m=+22.915854808" lastFinishedPulling="2026-03-07 00:54:21.805914011 +0000 UTC m=+37.812453652" observedRunningTime="2026-03-07 00:54:24.4663985 +0000 UTC m=+40.472938141" watchObservedRunningTime="2026-03-07 00:54:24.467952975 +0000 UTC m=+40.474492616" Mar 7 00:54:24.483241 containerd[1473]: time="2026-03-07T00:54:24.463365555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:24.483241 containerd[1473]: time="2026-03-07T00:54:24.464103550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:24.483241 containerd[1473]: time="2026-03-07T00:54:24.464119471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:24.483241 containerd[1473]: time="2026-03-07T00:54:24.465779510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:24.518267 systemd[1]: Started cri-containerd-b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120.scope - libcontainer container b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120. 
Mar 7 00:54:24.632168 containerd[1473]: time="2026-03-07T00:54:24.631352125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8p8nh,Uid:127a57c4-5bc8-40d4-8acc-8570b2710eaf,Namespace:calico-system,Attempt:0,} returns sandbox id \"b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120\"" Mar 7 00:54:24.637587 containerd[1473]: time="2026-03-07T00:54:24.637507700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 00:54:24.648087 containerd[1473]: time="2026-03-07T00:54:24.647286248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54959b7f66-h654f,Uid:3060f919-958a-4697-b760-500d72a51d7d,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:24.667975 containerd[1473]: time="2026-03-07T00:54:24.667915077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54959b7f66-fgbzm,Uid:db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.624 [INFO][3700] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.624 [INFO][3700] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" iface="eth0" netns="/var/run/netns/cni-5b74af43-16b7-dddc-99c9-4fc44d1d956d" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.624 [INFO][3700] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" iface="eth0" netns="/var/run/netns/cni-5b74af43-16b7-dddc-99c9-4fc44d1d956d" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.625 [INFO][3700] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" iface="eth0" netns="/var/run/netns/cni-5b74af43-16b7-dddc-99c9-4fc44d1d956d" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.625 [INFO][3700] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.625 [INFO][3700] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.728 [INFO][3792] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.735 [INFO][3792] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.735 [INFO][3792] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.763 [WARNING][3792] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.763 [INFO][3792] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.770 [INFO][3792] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.787156 containerd[1473]: 2026-03-07 00:54:24.777 [INFO][3700] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:24.788305 containerd[1473]: time="2026-03-07T00:54:24.788175240Z" level=info msg="TearDown network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\" successfully" Mar 7 00:54:24.788305 containerd[1473]: time="2026-03-07T00:54:24.788223282Z" level=info msg="StopPodSandbox for \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\" returns successfully" Mar 7 00:54:24.789210 containerd[1473]: time="2026-03-07T00:54:24.789161967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7gff,Uid:ea70a10d-7ce3-4275-9b2c-469af8e4ffc8,Namespace:kube-system,Attempt:1,}" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.689 [INFO][3728] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.691 [INFO][3728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" iface="eth0" netns="/var/run/netns/cni-1b11e615-4ce7-93b1-6468-d1943e77bd1e" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.692 [INFO][3728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" iface="eth0" netns="/var/run/netns/cni-1b11e615-4ce7-93b1-6468-d1943e77bd1e" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.692 [INFO][3728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" iface="eth0" netns="/var/run/netns/cni-1b11e615-4ce7-93b1-6468-d1943e77bd1e" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.692 [INFO][3728] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.692 [INFO][3728] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.792 [INFO][3809] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.792 [INFO][3809] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.792 [INFO][3809] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.821 [WARNING][3809] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.821 [INFO][3809] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.826 [INFO][3809] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.852054 containerd[1473]: 2026-03-07 00:54:24.835 [INFO][3728] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:24.852737 containerd[1473]: time="2026-03-07T00:54:24.852552485Z" level=info msg="TearDown network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\" successfully" Mar 7 00:54:24.852737 containerd[1473]: time="2026-03-07T00:54:24.852595047Z" level=info msg="StopPodSandbox for \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\" returns successfully" Mar 7 00:54:24.854853 containerd[1473]: time="2026-03-07T00:54:24.853418526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6gzpk,Uid:12a36780-8f9b-49b6-ae7a-05cc5d985d69,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.654 [INFO][3703] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.654 [INFO][3703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" iface="eth0" netns="/var/run/netns/cni-4168f554-a696-0b5d-06a5-c2623fff0462" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.655 [INFO][3703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" iface="eth0" netns="/var/run/netns/cni-4168f554-a696-0b5d-06a5-c2623fff0462" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.655 [INFO][3703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" iface="eth0" netns="/var/run/netns/cni-4168f554-a696-0b5d-06a5-c2623fff0462" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.655 [INFO][3703] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.656 [INFO][3703] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.798 [INFO][3803] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.799 [INFO][3803] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.828 [INFO][3803] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.872 [WARNING][3803] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.872 [INFO][3803] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.879 [INFO][3803] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.914737 containerd[1473]: 2026-03-07 00:54:24.892 [INFO][3703] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:24.915136 containerd[1473]: time="2026-03-07T00:54:24.914902793Z" level=info msg="TearDown network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\" successfully" Mar 7 00:54:24.915136 containerd[1473]: time="2026-03-07T00:54:24.914941395Z" level=info msg="StopPodSandbox for \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\" returns successfully" Mar 7 00:54:24.922429 containerd[1473]: time="2026-03-07T00:54:24.922381431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559d85bfb5-ppgz2,Uid:09e69c3c-3f2b-493c-a453-9280256b7c38,Namespace:calico-system,Attempt:1,}" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.702 [INFO][3748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.702 [INFO][3748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" iface="eth0" netns="/var/run/netns/cni-a0616928-7aba-bdff-ad31-231cbdf62194" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.703 [INFO][3748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" iface="eth0" netns="/var/run/netns/cni-a0616928-7aba-bdff-ad31-231cbdf62194" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.704 [INFO][3748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" iface="eth0" netns="/var/run/netns/cni-a0616928-7aba-bdff-ad31-231cbdf62194" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.704 [INFO][3748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.704 [INFO][3748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.863 [INFO][3824] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.866 [INFO][3824] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.879 [INFO][3824] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.906 [WARNING][3824] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.906 [INFO][3824] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.909 [INFO][3824] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:24.924099 containerd[1473]: 2026-03-07 00:54:24.918 [INFO][3748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:24.925194 containerd[1473]: time="2026-03-07T00:54:24.925159164Z" level=info msg="TearDown network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\" successfully" Mar 7 00:54:24.925328 containerd[1473]: time="2026-03-07T00:54:24.925307411Z" level=info msg="StopPodSandbox for \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\" returns successfully" Mar 7 00:54:24.963330 systemd[1]: run-netns-cni\x2d1b11e615\x2d4ce7\x2d93b1\x2d6468\x2dd1943e77bd1e.mount: Deactivated successfully. Mar 7 00:54:24.963455 systemd[1]: run-netns-cni\x2d4168f554\x2da696\x2d0b5d\x2d06a5\x2dc2623fff0462.mount: Deactivated successfully. Mar 7 00:54:24.963507 systemd[1]: run-netns-cni\x2da0616928\x2d7aba\x2dbdff\x2dad31\x2d231cbdf62194.mount: Deactivated successfully. Mar 7 00:54:24.963607 systemd[1]: run-netns-cni\x2d5b74af43\x2d16b7\x2ddddc\x2d99c9\x2d4fc44d1d956d.mount: Deactivated successfully. 
Mar 7 00:54:25.045542 kubelet[2572]: I0307 00:54:25.044423 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vk94\" (UniqueName: \"kubernetes.io/projected/2aea807c-a68a-4974-811d-20e6d9c78bc0-kube-api-access-8vk94\") pod \"2aea807c-a68a-4974-811d-20e6d9c78bc0\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " Mar 7 00:54:25.046602 kubelet[2572]: I0307 00:54:25.045794 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-backend-key-pair\") pod \"2aea807c-a68a-4974-811d-20e6d9c78bc0\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " Mar 7 00:54:25.046602 kubelet[2572]: I0307 00:54:25.045862 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-nginx-config\") pod \"2aea807c-a68a-4974-811d-20e6d9c78bc0\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " Mar 7 00:54:25.046602 kubelet[2572]: I0307 00:54:25.045885 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-ca-bundle\") pod \"2aea807c-a68a-4974-811d-20e6d9c78bc0\" (UID: \"2aea807c-a68a-4974-811d-20e6d9c78bc0\") " Mar 7 00:54:25.048023 kubelet[2572]: I0307 00:54:25.047734 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "2aea807c-a68a-4974-811d-20e6d9c78bc0" (UID: "2aea807c-a68a-4974-811d-20e6d9c78bc0"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:54:25.048023 kubelet[2572]: I0307 00:54:25.047988 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2aea807c-a68a-4974-811d-20e6d9c78bc0" (UID: "2aea807c-a68a-4974-811d-20e6d9c78bc0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:54:25.055824 systemd[1]: var-lib-kubelet-pods-2aea807c\x2da68a\x2d4974\x2d811d\x2d20e6d9c78bc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8vk94.mount: Deactivated successfully. Mar 7 00:54:25.078220 systemd[1]: var-lib-kubelet-pods-2aea807c\x2da68a\x2d4974\x2d811d\x2d20e6d9c78bc0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 00:54:25.083314 kubelet[2572]: I0307 00:54:25.082993 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2aea807c-a68a-4974-811d-20e6d9c78bc0" (UID: "2aea807c-a68a-4974-811d-20e6d9c78bc0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 00:54:25.083835 kubelet[2572]: I0307 00:54:25.083758 2572 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aea807c-a68a-4974-811d-20e6d9c78bc0-kube-api-access-8vk94" (OuterVolumeSpecName: "kube-api-access-8vk94") pod "2aea807c-a68a-4974-811d-20e6d9c78bc0" (UID: "2aea807c-a68a-4974-811d-20e6d9c78bc0"). 
InnerVolumeSpecName "kube-api-access-8vk94". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 00:54:25.147464 kubelet[2572]: I0307 00:54:25.147289 2572 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-ca-bundle\") on node \"ci-4081-3-6-n-2a659a64a8\" DevicePath \"\"" Mar 7 00:54:25.148580 kubelet[2572]: I0307 00:54:25.148406 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8vk94\" (UniqueName: \"kubernetes.io/projected/2aea807c-a68a-4974-811d-20e6d9c78bc0-kube-api-access-8vk94\") on node \"ci-4081-3-6-n-2a659a64a8\" DevicePath \"\"" Mar 7 00:54:25.148580 kubelet[2572]: I0307 00:54:25.148447 2572 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2aea807c-a68a-4974-811d-20e6d9c78bc0-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-2a659a64a8\" DevicePath \"\"" Mar 7 00:54:25.148580 kubelet[2572]: I0307 00:54:25.148459 2572 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2aea807c-a68a-4974-811d-20e6d9c78bc0-nginx-config\") on node \"ci-4081-3-6-n-2a659a64a8\" DevicePath \"\"" Mar 7 00:54:25.204550 systemd-networkd[1365]: cali75e546826f0: Link UP Mar 7 00:54:25.215246 systemd-networkd[1365]: cali75e546826f0: Gained carrier Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:24.819 [ERROR][3828] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:24.874 [INFO][3828] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0 calico-apiserver-54959b7f66- calico-system db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf 843 0 2026-03-07 00:54:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54959b7f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 calico-apiserver-54959b7f66-fgbzm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali75e546826f0 [] [] }} ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:24.878 [INFO][3828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.036 [INFO][3862] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" HandleID="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 
00:54:25.081 [INFO][3862] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" HandleID="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cb1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"calico-apiserver-54959b7f66-fgbzm", "timestamp":"2026-03-07 00:54:25.036408648 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002fe2c0)} Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.081 [INFO][3862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.081 [INFO][3862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.081 [INFO][3862] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.099 [INFO][3862] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.128 [INFO][3862] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.140 [INFO][3862] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.144 [INFO][3862] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.150 [INFO][3862] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.151 [INFO][3862] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.154 [INFO][3862] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854 Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.166 [INFO][3862] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.180 [INFO][3862] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.194/26] block=192.168.96.192/26 handle="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.180 [INFO][3862] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.194/26] 
handle="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.180 [INFO][3862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:25.262930 containerd[1473]: 2026-03-07 00:54:25.180 [INFO][3862] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.194/26] IPv6=[] ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" HandleID="k8s-pod-network.8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" Mar 7 00:54:25.263913 containerd[1473]: 2026-03-07 00:54:25.189 [INFO][3828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0", GenerateName:"calico-apiserver-54959b7f66-", Namespace:"calico-system", SelfLink:"", UID:"db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54959b7f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"calico-apiserver-54959b7f66-fgbzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali75e546826f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.263913 containerd[1473]: 2026-03-07 00:54:25.190 [INFO][3828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.194/32] ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" Mar 7 00:54:25.263913 containerd[1473]: 2026-03-07 00:54:25.190 [INFO][3828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75e546826f0 ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" Mar 7 00:54:25.263913 containerd[1473]: 2026-03-07 00:54:25.214 [INFO][3828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" 
Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" Mar 7 00:54:25.263913 containerd[1473]: 2026-03-07 00:54:25.222 [INFO][3828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0", GenerateName:"calico-apiserver-54959b7f66-", Namespace:"calico-system", SelfLink:"", UID:"db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54959b7f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854", Pod:"calico-apiserver-54959b7f66-fgbzm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali75e546826f0", MAC:"0e:c5:91:5a:30:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.263913 containerd[1473]: 2026-03-07 00:54:25.256 [INFO][3828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-fgbzm" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--fgbzm-eth0" Mar 7 00:54:25.327796 systemd-networkd[1365]: cali930e50ece2d: Link UP Mar 7 00:54:25.334140 systemd-networkd[1365]: cali930e50ece2d: Gained carrier Mar 7 00:54:25.362278 containerd[1473]: time="2026-03-07T00:54:25.361590437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.362278 containerd[1473]: time="2026-03-07T00:54:25.361659000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.362278 containerd[1473]: time="2026-03-07T00:54:25.361683921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.362278 containerd[1473]: time="2026-03-07T00:54:25.361813247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:24.830 [ERROR][3813] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:24.896 [INFO][3813] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0 calico-apiserver-54959b7f66- calico-system 3060f919-958a-4697-b760-500d72a51d7d 841 0 2026-03-07 00:54:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54959b7f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 calico-apiserver-54959b7f66-h654f eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali930e50ece2d [] [] }} ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:24.896 [INFO][3813] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.099 [INFO][3870] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" HandleID="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.140 [INFO][3870] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" HandleID="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003628a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"calico-apiserver-54959b7f66-h654f", "timestamp":"2026-03-07 00:54:25.099470546 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d49a0)} Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.140 [INFO][3870] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.183 [INFO][3870] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.183 [INFO][3870] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.197 [INFO][3870] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.256 [INFO][3870] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.274 [INFO][3870] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.281 [INFO][3870] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.285 [INFO][3870] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.285 [INFO][3870] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.290 [INFO][3870] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7 Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.298 [INFO][3870] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.309 [INFO][3870] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.195/26] block=192.168.96.192/26 handle="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.309 [INFO][3870] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.195/26] handle="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.309 [INFO][3870] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.374323 containerd[1473]: 2026-03-07 00:54:25.309 [INFO][3870] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.195/26] IPv6=[] ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" HandleID="k8s-pod-network.695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" Mar 7 00:54:25.374966 containerd[1473]: 2026-03-07 00:54:25.321 [INFO][3813] cni-plugin/k8s.go 418: Populated endpoint ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0", GenerateName:"calico-apiserver-54959b7f66-", Namespace:"calico-system", SelfLink:"", UID:"3060f919-958a-4697-b760-500d72a51d7d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54959b7f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"calico-apiserver-54959b7f66-h654f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali930e50ece2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.374966 containerd[1473]: 2026-03-07 00:54:25.321 [INFO][3813] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.195/32] ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" Mar 7 00:54:25.374966 containerd[1473]: 2026-03-07 00:54:25.321 [INFO][3813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali930e50ece2d ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" Mar 7 00:54:25.374966 containerd[1473]: 2026-03-07 00:54:25.341 [INFO][3813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" Mar 7 00:54:25.374966 containerd[1473]: 2026-03-07 00:54:25.344 [INFO][3813] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0", GenerateName:"calico-apiserver-54959b7f66-", Namespace:"calico-system", SelfLink:"", UID:"3060f919-958a-4697-b760-500d72a51d7d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54959b7f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7", Pod:"calico-apiserver-54959b7f66-h654f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.96.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali930e50ece2d", MAC:"7a:40:70:f5:75:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.374966 containerd[1473]: 2026-03-07 00:54:25.365 [INFO][3813] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7" Namespace="calico-system" Pod="calico-apiserver-54959b7f66-h654f" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--apiserver--54959b7f66--h654f-eth0" Mar 7 00:54:25.465195 systemd-networkd[1365]: cali373f140d0be: Link UP Mar 7 00:54:25.470406 systemd-networkd[1365]: cali373f140d0be: Gained carrier Mar 7 00:54:25.477801 systemd[1]: Started cri-containerd-8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854.scope - libcontainer container 8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854. Mar 7 00:54:25.487494 systemd[1]: Removed slice kubepods-besteffort-pod2aea807c_a68a_4974_811d_20e6d9c78bc0.slice - libcontainer container kubepods-besteffort-pod2aea807c_a68a_4974_811d_20e6d9c78bc0.slice. Mar 7 00:54:25.503964 containerd[1473]: time="2026-03-07T00:54:25.503669255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.503964 containerd[1473]: time="2026-03-07T00:54:25.503740699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.503964 containerd[1473]: time="2026-03-07T00:54:25.503786941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.504344 containerd[1473]: time="2026-03-07T00:54:25.503909466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:24.933 [ERROR][3847] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.037 [INFO][3847] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0 coredns-674b8bbfcf- kube-system ea70a10d-7ce3-4275-9b2c-469af8e4ffc8 889 0 2026-03-07 00:53:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 coredns-674b8bbfcf-q7gff eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali373f140d0be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.037 [INFO][3847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.219 [INFO][3926] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" HandleID="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.265 [INFO][3926] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" HandleID="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000388e00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"coredns-674b8bbfcf-q7gff", "timestamp":"2026-03-07 00:54:25.219969279 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000bac60)} Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.265 [INFO][3926] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.309 [INFO][3926] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.310 [INFO][3926] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.319 [INFO][3926] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.338 [INFO][3926] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.381 [INFO][3926] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.386 [INFO][3926] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.391 [INFO][3926] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.391 [INFO][3926] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.398 [INFO][3926] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911 Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.409 [INFO][3926] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.431 [INFO][3926] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.196/26] block=192.168.96.192/26 handle="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.431 [INFO][3926] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.196/26] handle="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.431 [INFO][3926] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.560446 containerd[1473]: 2026-03-07 00:54:25.431 [INFO][3926] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.196/26] IPv6=[] ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" HandleID="k8s-pod-network.49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:25.561757 containerd[1473]: 2026-03-07 00:54:25.458 [INFO][3847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea70a10d-7ce3-4275-9b2c-469af8e4ffc8", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"coredns-674b8bbfcf-q7gff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373f140d0be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.561757 containerd[1473]: 2026-03-07 00:54:25.458 [INFO][3847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.196/32] ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:25.561757 containerd[1473]: 2026-03-07 00:54:25.458 [INFO][3847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali373f140d0be ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:25.561757 containerd[1473]: 2026-03-07 00:54:25.474 [INFO][3847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:25.561757 containerd[1473]: 2026-03-07 00:54:25.497 [INFO][3847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea70a10d-7ce3-4275-9b2c-469af8e4ffc8", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911", Pod:"coredns-674b8bbfcf-q7gff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373f140d0be", MAC:"ee:35:8e:86:89:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.561757 containerd[1473]: 2026-03-07 00:54:25.550 [INFO][3847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911" Namespace="kube-system" Pod="coredns-674b8bbfcf-q7gff" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:25.634189 containerd[1473]: time="2026-03-07T00:54:25.629322349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.636201 systemd-networkd[1365]: caliec156762035: Link UP Mar 7 00:54:25.637593 systemd-networkd[1365]: caliec156762035: Gained carrier Mar 7 00:54:25.643627 containerd[1473]: time="2026-03-07T00:54:25.640391304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.643627 containerd[1473]: time="2026-03-07T00:54:25.640430986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.644097 containerd[1473]: time="2026-03-07T00:54:25.643566532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.066 [ERROR][3864] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.140 [INFO][3864] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0 goldmane-5b85766d88- calico-system 12a36780-8f9b-49b6-ae7a-05cc5d985d69 892 0 2026-03-07 00:54:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 goldmane-5b85766d88-6gzpk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliec156762035 [] [] }} ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.141 [INFO][3864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.235 [INFO][3953] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" HandleID="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.269 [INFO][3953] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" HandleID="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027ddc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"goldmane-5b85766d88-6gzpk", "timestamp":"2026-03-07 00:54:25.235266832 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004e74a0)} Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.270 [INFO][3953] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.432 [INFO][3953] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.434 [INFO][3953] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.440 [INFO][3953] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.480 [INFO][3953] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.514 [INFO][3953] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.521 [INFO][3953] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.538 [INFO][3953] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.547 [INFO][3953] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.565 [INFO][3953] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.579 [INFO][3953] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.597 [INFO][3953] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.197/26] block=192.168.96.192/26 handle="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.597 [INFO][3953] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.197/26] handle="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.598 [INFO][3953] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.701601 containerd[1473]: 2026-03-07 00:54:25.598 [INFO][3953] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.197/26] IPv6=[] ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" HandleID="k8s-pod-network.5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:25.702330 containerd[1473]: 2026-03-07 00:54:25.610 [INFO][3864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"12a36780-8f9b-49b6-ae7a-05cc5d985d69", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"goldmane-5b85766d88-6gzpk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec156762035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.702330 containerd[1473]: 2026-03-07 00:54:25.610 [INFO][3864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.197/32] ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:25.702330 containerd[1473]: 2026-03-07 00:54:25.610 [INFO][3864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec156762035 ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:25.702330 containerd[1473]: 2026-03-07 00:54:25.638 [INFO][3864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:25.702330 containerd[1473]: 2026-03-07 00:54:25.640 [INFO][3864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" 
Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"12a36780-8f9b-49b6-ae7a-05cc5d985d69", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f", Pod:"goldmane-5b85766d88-6gzpk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec156762035", MAC:"92:c1:41:93:b0:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.702330 containerd[1473]: 2026-03-07 00:54:25.697 [INFO][3864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f" Namespace="calico-system" Pod="goldmane-5b85766d88-6gzpk" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:25.745813 containerd[1473]: time="2026-03-07T00:54:25.745610366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:25.745813 containerd[1473]: time="2026-03-07T00:54:25.745687969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:25.745813 containerd[1473]: time="2026-03-07T00:54:25.745705090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.748174 containerd[1473]: time="2026-03-07T00:54:25.746127670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:25.758380 kubelet[2572]: I0307 00:54:25.758328 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5f6e6480-f744-4e70-be69-a9ff1a0d7705-whisker-backend-key-pair\") pod \"whisker-7688855859-jw2d4\" (UID: \"5f6e6480-f744-4e70-be69-a9ff1a0d7705\") " pod="calico-system/whisker-7688855859-jw2d4" Mar 7 00:54:25.758380 kubelet[2572]: I0307 00:54:25.758382 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5f6e6480-f744-4e70-be69-a9ff1a0d7705-nginx-config\") pod \"whisker-7688855859-jw2d4\" (UID: \"5f6e6480-f744-4e70-be69-a9ff1a0d7705\") " pod="calico-system/whisker-7688855859-jw2d4" Mar 7 00:54:25.758623 kubelet[2572]: I0307 00:54:25.758410 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86g9p\" (UniqueName: \"kubernetes.io/projected/5f6e6480-f744-4e70-be69-a9ff1a0d7705-kube-api-access-86g9p\") pod \"whisker-7688855859-jw2d4\" (UID: \"5f6e6480-f744-4e70-be69-a9ff1a0d7705\") " pod="calico-system/whisker-7688855859-jw2d4" Mar 7 00:54:25.758623 kubelet[2572]: I0307 00:54:25.758429 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6e6480-f744-4e70-be69-a9ff1a0d7705-whisker-ca-bundle\") pod \"whisker-7688855859-jw2d4\" (UID: \"5f6e6480-f744-4e70-be69-a9ff1a0d7705\") " pod="calico-system/whisker-7688855859-jw2d4" Mar 7 00:54:25.823403 systemd[1]: Started cri-containerd-695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7.scope - libcontainer container 695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7. Mar 7 00:54:25.831114 systemd-networkd[1365]: calid745776d435: Link UP Mar 7 00:54:25.835158 systemd-networkd[1365]: calid745776d435: Gained carrier Mar 7 00:54:25.840881 systemd[1]: Started cri-containerd-49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911.scope - libcontainer container 49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911. Mar 7 00:54:25.845307 systemd[1]: Started cri-containerd-5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f.scope - libcontainer container 5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f. Mar 7 00:54:25.858898 systemd[1]: Created slice kubepods-besteffort-pod5f6e6480_f744_4e70_be69_a9ff1a0d7705.slice - libcontainer container kubepods-besteffort-pod5f6e6480_f744_4e70_be69_a9ff1a0d7705.slice. 
Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.213 [ERROR][3938] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.268 [INFO][3938] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0 calico-kube-controllers-559d85bfb5- calico-system 09e69c3c-3f2b-493c-a453-9280256b7c38 890 0 2026-03-07 00:54:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:559d85bfb5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 calico-kube-controllers-559d85bfb5-ppgz2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid745776d435 [] [] }} ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.269 [INFO][3938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.412 [INFO][3981] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" HandleID="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.493 [INFO][3981] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" HandleID="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fb580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"calico-kube-controllers-559d85bfb5-ppgz2", "timestamp":"2026-03-07 00:54:25.412358722 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002bcdc0)} Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.494 [INFO][3981] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.598 [INFO][3981] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.598 [INFO][3981] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.623 [INFO][3981] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.733 [INFO][3981] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.760 [INFO][3981] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.768 [INFO][3981] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.783 [INFO][3981] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.783 [INFO][3981] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.797 [INFO][3981] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.804 [INFO][3981] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.815 [INFO][3981] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.198/26] block=192.168.96.192/26 handle="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.815 [INFO][3981] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.198/26] handle="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.815 [INFO][3981] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:25.936126 containerd[1473]: 2026-03-07 00:54:25.815 [INFO][3981] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.198/26] IPv6=[] ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" HandleID="k8s-pod-network.98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:25.936845 containerd[1473]: 2026-03-07 00:54:25.820 [INFO][3938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0", GenerateName:"calico-kube-controllers-559d85bfb5-", Namespace:"calico-system", SelfLink:"", UID:"09e69c3c-3f2b-493c-a453-9280256b7c38", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559d85bfb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"calico-kube-controllers-559d85bfb5-ppgz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid745776d435", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.936845 containerd[1473]: 2026-03-07 00:54:25.820 [INFO][3938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.198/32] ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:25.936845 containerd[1473]: 2026-03-07 00:54:25.821 [INFO][3938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid745776d435 ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:25.936845 containerd[1473]: 2026-03-07 00:54:25.835 [INFO][3938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" 
WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:25.936845 containerd[1473]: 2026-03-07 00:54:25.838 [INFO][3938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0", GenerateName:"calico-kube-controllers-559d85bfb5-", Namespace:"calico-system", SelfLink:"", UID:"09e69c3c-3f2b-493c-a453-9280256b7c38", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559d85bfb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae", Pod:"calico-kube-controllers-559d85bfb5-ppgz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid745776d435", MAC:"7a:3a:b8:16:89:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:25.936845 containerd[1473]: 2026-03-07 00:54:25.913 [INFO][3938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae" Namespace="calico-system" Pod="calico-kube-controllers-559d85bfb5-ppgz2" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:25.960497 systemd-networkd[1365]: calice2856210e8: Gained IPv6LL Mar 7 00:54:26.022785 containerd[1473]: time="2026-03-07T00:54:26.021960772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:26.031622 containerd[1473]: time="2026-03-07T00:54:26.031140708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:26.031622 containerd[1473]: time="2026-03-07T00:54:26.031180190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:26.034366 containerd[1473]: time="2026-03-07T00:54:26.032152674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:26.061809 containerd[1473]: time="2026-03-07T00:54:26.061760535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q7gff,Uid:ea70a10d-7ce3-4275-9b2c-469af8e4ffc8,Namespace:kube-system,Attempt:1,} returns sandbox id \"49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911\"" Mar 7 00:54:26.089167 containerd[1473]: time="2026-03-07T00:54:26.089121534Z" level=info msg="CreateContainer within sandbox \"49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:26.094633 systemd[1]: Started cri-containerd-98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae.scope - libcontainer container 98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae. Mar 7 00:54:26.147815 containerd[1473]: time="2026-03-07T00:54:26.147755270Z" level=info msg="CreateContainer within sandbox \"49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"36eab119b22f7218fee5100c9e561a2b7c627950d79fa1e1ce853ea22606d14c\"" Mar 7 00:54:26.149537 containerd[1473]: time="2026-03-07T00:54:26.149498828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-6gzpk,Uid:12a36780-8f9b-49b6-ae7a-05cc5d985d69,Namespace:calico-system,Attempt:1,} returns sandbox id \"5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f\"" Mar 7 00:54:26.150769 containerd[1473]: time="2026-03-07T00:54:26.150729724Z" level=info msg="StartContainer for \"36eab119b22f7218fee5100c9e561a2b7c627950d79fa1e1ce853ea22606d14c\"" Mar 7 00:54:26.154465 containerd[1473]: time="2026-03-07T00:54:26.152874701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54959b7f66-fgbzm,Uid:db1c0333-fb89-4a9a-b9e2-a32a5d4a41cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854\"" Mar 7 00:54:26.155820 kubelet[2572]: I0307 00:54:26.155771 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aea807c-a68a-4974-811d-20e6d9c78bc0" path="/var/lib/kubelet/pods/2aea807c-a68a-4974-811d-20e6d9c78bc0/volumes" Mar 7 00:54:26.198804 containerd[1473]: time="2026-03-07T00:54:26.198762740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54959b7f66-h654f,Uid:3060f919-958a-4697-b760-500d72a51d7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7\"" Mar 7 00:54:26.221244 containerd[1473]: time="2026-03-07T00:54:26.220964105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-559d85bfb5-ppgz2,Uid:09e69c3c-3f2b-493c-a453-9280256b7c38,Namespace:calico-system,Attempt:1,} returns sandbox id \"98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae\"" Mar 7 00:54:26.237704 systemd[1]: Started cri-containerd-36eab119b22f7218fee5100c9e561a2b7c627950d79fa1e1ce853ea22606d14c.scope - libcontainer container 36eab119b22f7218fee5100c9e561a2b7c627950d79fa1e1ce853ea22606d14c. 
Mar 7 00:54:26.276982 containerd[1473]: time="2026-03-07T00:54:26.276792153Z" level=info msg="StartContainer for \"36eab119b22f7218fee5100c9e561a2b7c627950d79fa1e1ce853ea22606d14c\" returns successfully" Mar 7 00:54:26.280356 containerd[1473]: time="2026-03-07T00:54:26.279512277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7688855859-jw2d4,Uid:5f6e6480-f744-4e70-be69-a9ff1a0d7705,Namespace:calico-system,Attempt:0,}" Mar 7 00:54:26.494039 kubelet[2572]: I0307 00:54:26.493912 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q7gff" podStartSLOduration=36.493893386 podStartE2EDuration="36.493893386s" podCreationTimestamp="2026-03-07 00:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:26.491463116 +0000 UTC m=+42.498002797" watchObservedRunningTime="2026-03-07 00:54:26.493893386 +0000 UTC m=+42.500433027" Mar 7 00:54:26.531621 systemd-networkd[1365]: calie2721ba8b7c: Link UP Mar 7 00:54:26.532492 systemd-networkd[1365]: calie2721ba8b7c: Gained carrier Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.333 [ERROR][4319] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.353 [INFO][4319] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0 whisker-7688855859- calico-system 5f6e6480-f744-4e70-be69-a9ff1a0d7705 921 0 2026-03-07 00:54:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7688855859 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 whisker-7688855859-jw2d4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2721ba8b7c [] [] }} ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.353 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.419 [INFO][4333] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" HandleID="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.441 [INFO][4333] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" HandleID="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbe90), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"whisker-7688855859-jw2d4", "timestamp":"2026-03-07 00:54:26.419510057 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000ac580)} Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.441 [INFO][4333] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.442 [INFO][4333] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.442 [INFO][4333] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.446 [INFO][4333] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.458 [INFO][4333] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.480 [INFO][4333] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.485 [INFO][4333] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.490 [INFO][4333] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.490 [INFO][4333] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.493 [INFO][4333] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.505 [INFO][4333] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.515 [INFO][4333] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.199/26] block=192.168.96.192/26 handle="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.516 [INFO][4333] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.199/26] handle="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.516 [INFO][4333] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:26.558174 containerd[1473]: 2026-03-07 00:54:26.516 [INFO][4333] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.199/26] IPv6=[] ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" HandleID="k8s-pod-network.16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" Mar 7 00:54:26.561137 containerd[1473]: 2026-03-07 00:54:26.521 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0", GenerateName:"whisker-7688855859-", Namespace:"calico-system", SelfLink:"", UID:"5f6e6480-f744-4e70-be69-a9ff1a0d7705", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7688855859", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"whisker-7688855859-jw2d4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2721ba8b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:26.561137 containerd[1473]: 2026-03-07 00:54:26.521 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.199/32] ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" Mar 7 00:54:26.561137 containerd[1473]: 2026-03-07 00:54:26.521 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2721ba8b7c ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" Mar 7 00:54:26.561137 containerd[1473]: 2026-03-07 00:54:26.534 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" Mar 7 00:54:26.561137 containerd[1473]: 2026-03-07 00:54:26.535 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" 
Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0", GenerateName:"whisker-7688855859-", Namespace:"calico-system", SelfLink:"", UID:"5f6e6480-f744-4e70-be69-a9ff1a0d7705", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7688855859", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc", Pod:"whisker-7688855859-jw2d4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.96.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2721ba8b7c", MAC:"36:06:88:60:be:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:26.561137 containerd[1473]: 2026-03-07 00:54:26.554 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc" Namespace="calico-system" Pod="whisker-7688855859-jw2d4" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--7688855859--jw2d4-eth0" Mar 7 00:54:26.605921 containerd[1473]: time="2026-03-07T00:54:26.598234351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:26.605921 containerd[1473]: time="2026-03-07T00:54:26.602289055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:26.605921 containerd[1473]: time="2026-03-07T00:54:26.602309496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:26.605921 containerd[1473]: time="2026-03-07T00:54:26.602459903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:26.649483 systemd[1]: Started cri-containerd-16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc.scope - libcontainer container 16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc. 
Mar 7 00:54:26.663950 systemd-networkd[1365]: caliec156762035: Gained IPv6LL Mar 7 00:54:26.664250 systemd-networkd[1365]: cali75e546826f0: Gained IPv6LL Mar 7 00:54:26.741713 containerd[1473]: time="2026-03-07T00:54:26.741661047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7688855859-jw2d4,Uid:5f6e6480-f744-4e70-be69-a9ff1a0d7705,Namespace:calico-system,Attempt:0,} returns sandbox id \"16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc\"" Mar 7 00:54:26.846080 kernel: calico-node[4013]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 00:54:26.854070 systemd-networkd[1365]: cali373f140d0be: Gained IPv6LL Mar 7 00:54:26.854896 systemd-networkd[1365]: cali930e50ece2d: Gained IPv6LL Mar 7 00:54:27.045260 systemd-networkd[1365]: calid745776d435: Gained IPv6LL Mar 7 00:54:27.296217 systemd-networkd[1365]: vxlan.calico: Link UP Mar 7 00:54:27.296226 systemd-networkd[1365]: vxlan.calico: Gained carrier Mar 7 00:54:27.750246 systemd-networkd[1365]: calie2721ba8b7c: Gained IPv6LL Mar 7 00:54:28.167792 containerd[1473]: time="2026-03-07T00:54:28.167703485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.169152 containerd[1473]: time="2026-03-07T00:54:28.169101225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 00:54:28.172087 containerd[1473]: time="2026-03-07T00:54:28.170671492Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.174176 containerd[1473]: time="2026-03-07T00:54:28.174133481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:28.175241 containerd[1473]: time="2026-03-07T00:54:28.175188366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 3.537600342s" Mar 7 00:54:28.175385 containerd[1473]: time="2026-03-07T00:54:28.175368814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 00:54:28.176553 containerd[1473]: time="2026-03-07T00:54:28.176525023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 00:54:28.181193 containerd[1473]: time="2026-03-07T00:54:28.181158662Z" level=info msg="CreateContainer within sandbox \"b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 00:54:28.203036 containerd[1473]: time="2026-03-07T00:54:28.202913873Z" level=info msg="CreateContainer within sandbox \"b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e1febc752e1d8a46bad0e8155c5e0060526f06e0cf729e210d88cb6c7532886e\"" Mar 7 00:54:28.204424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853180337.mount: Deactivated successfully. 
Mar 7 00:54:28.206153 containerd[1473]: time="2026-03-07T00:54:28.205938043Z" level=info msg="StartContainer for \"e1febc752e1d8a46bad0e8155c5e0060526f06e0cf729e210d88cb6c7532886e\"" Mar 7 00:54:28.248301 systemd[1]: Started cri-containerd-e1febc752e1d8a46bad0e8155c5e0060526f06e0cf729e210d88cb6c7532886e.scope - libcontainer container e1febc752e1d8a46bad0e8155c5e0060526f06e0cf729e210d88cb6c7532886e. Mar 7 00:54:28.279777 containerd[1473]: time="2026-03-07T00:54:28.279585796Z" level=info msg="StartContainer for \"e1febc752e1d8a46bad0e8155c5e0060526f06e0cf729e210d88cb6c7532886e\" returns successfully" Mar 7 00:54:28.965702 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Mar 7 00:54:30.493918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773752673.mount: Deactivated successfully. Mar 7 00:54:30.821516 containerd[1473]: time="2026-03-07T00:54:30.819876735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.821516 containerd[1473]: time="2026-03-07T00:54:30.821320193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 7 00:54:30.822601 containerd[1473]: time="2026-03-07T00:54:30.822510241Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.825693 containerd[1473]: time="2026-03-07T00:54:30.825631408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:30.826590 containerd[1473]: time="2026-03-07T00:54:30.826457121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 2.649549002s" Mar 7 00:54:30.826590 containerd[1473]: time="2026-03-07T00:54:30.826494603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 7 00:54:30.829402 containerd[1473]: time="2026-03-07T00:54:30.827930061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:54:30.835057 containerd[1473]: time="2026-03-07T00:54:30.835006028Z" level=info msg="CreateContainer within sandbox \"5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 00:54:30.858814 containerd[1473]: time="2026-03-07T00:54:30.858767430Z" level=info msg="CreateContainer within sandbox \"5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b7298f34893bcc604b1dd9b6121bd36ab7ff699bd519af7ca26e5f267f671c98\"" Mar 7 00:54:30.862200 containerd[1473]: time="2026-03-07T00:54:30.860235609Z" level=info msg="StartContainer for \"b7298f34893bcc604b1dd9b6121bd36ab7ff699bd519af7ca26e5f267f671c98\"" Mar 7 00:54:30.898296 systemd[1]: Started cri-containerd-b7298f34893bcc604b1dd9b6121bd36ab7ff699bd519af7ca26e5f267f671c98.scope - libcontainer container 
b7298f34893bcc604b1dd9b6121bd36ab7ff699bd519af7ca26e5f267f671c98. Mar 7 00:54:30.944019 containerd[1473]: time="2026-03-07T00:54:30.943968361Z" level=info msg="StartContainer for \"b7298f34893bcc604b1dd9b6121bd36ab7ff699bd519af7ca26e5f267f671c98\" returns successfully" Mar 7 00:54:31.620738 kubelet[2572]: I0307 00:54:31.620617 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-6gzpk" podStartSLOduration=21.954204794 podStartE2EDuration="26.620591681s" podCreationTimestamp="2026-03-07 00:54:05 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.161424369 +0000 UTC m=+42.167963970" lastFinishedPulling="2026-03-07 00:54:30.827811216 +0000 UTC m=+46.834350857" observedRunningTime="2026-03-07 00:54:31.522983115 +0000 UTC m=+47.529522756" watchObservedRunningTime="2026-03-07 00:54:31.620591681 +0000 UTC m=+47.627131362" Mar 7 00:54:33.878585 containerd[1473]: time="2026-03-07T00:54:33.877576841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:33.878585 containerd[1473]: time="2026-03-07T00:54:33.878543597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 00:54:33.879526 containerd[1473]: time="2026-03-07T00:54:33.879491072Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:33.882682 containerd[1473]: time="2026-03-07T00:54:33.882487384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:33.884135 containerd[1473]: time="2026-03-07T00:54:33.884096244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.056125381s" Mar 7 00:54:33.884592 containerd[1473]: time="2026-03-07T00:54:33.884260650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:54:33.886135 containerd[1473]: time="2026-03-07T00:54:33.885491096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:54:33.890764 containerd[1473]: time="2026-03-07T00:54:33.890713131Z" level=info msg="CreateContainer within sandbox \"8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:54:33.909829 containerd[1473]: time="2026-03-07T00:54:33.909777682Z" level=info msg="CreateContainer within sandbox \"8b412aab552782fccba4d84e529409c57b9ab9755fd372f5096ecfc55f0d0854\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5056bf21df38f6fd2a168d12da48085a146f0b02a53343776372d6a3b30b5a24\"" Mar 7 00:54:33.912463 containerd[1473]: time="2026-03-07T00:54:33.912390499Z" level=info msg="StartContainer for \"5056bf21df38f6fd2a168d12da48085a146f0b02a53343776372d6a3b30b5a24\"" Mar 7 00:54:33.955329 systemd[1]: Started 
cri-containerd-5056bf21df38f6fd2a168d12da48085a146f0b02a53343776372d6a3b30b5a24.scope - libcontainer container 5056bf21df38f6fd2a168d12da48085a146f0b02a53343776372d6a3b30b5a24. Mar 7 00:54:33.994663 containerd[1473]: time="2026-03-07T00:54:33.994610965Z" level=info msg="StartContainer for \"5056bf21df38f6fd2a168d12da48085a146f0b02a53343776372d6a3b30b5a24\" returns successfully" Mar 7 00:54:34.303362 containerd[1473]: time="2026-03-07T00:54:34.303230850Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:34.305630 containerd[1473]: time="2026-03-07T00:54:34.305450210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 00:54:34.307492 containerd[1473]: time="2026-03-07T00:54:34.307447803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 421.915665ms" Mar 7 00:54:34.307565 containerd[1473]: time="2026-03-07T00:54:34.307500285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:54:34.310376 containerd[1473]: time="2026-03-07T00:54:34.310337787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 00:54:34.315161 containerd[1473]: time="2026-03-07T00:54:34.315115761Z" level=info msg="CreateContainer within sandbox \"695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:54:34.337520 containerd[1473]: time="2026-03-07T00:54:34.337465652Z" level=info msg="CreateContainer within sandbox \"695d60d16de11d51475e64aad6ad87cfa45a86b8e5eeb549e350e36e2ef0a7e7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f6126d8d0c3f3bf0d92cc374dd8c88ffbb7838b16e22d9fad4c27f0db9016e22\"" Mar 7 00:54:34.340114 containerd[1473]: time="2026-03-07T00:54:34.340035225Z" level=info msg="StartContainer for \"f6126d8d0c3f3bf0d92cc374dd8c88ffbb7838b16e22d9fad4c27f0db9016e22\"" Mar 7 00:54:34.378978 systemd[1]: Started cri-containerd-f6126d8d0c3f3bf0d92cc374dd8c88ffbb7838b16e22d9fad4c27f0db9016e22.scope - libcontainer container f6126d8d0c3f3bf0d92cc374dd8c88ffbb7838b16e22d9fad4c27f0db9016e22. 
Mar 7 00:54:34.451755 containerd[1473]: time="2026-03-07T00:54:34.451631754Z" level=info msg="StartContainer for \"f6126d8d0c3f3bf0d92cc374dd8c88ffbb7838b16e22d9fad4c27f0db9016e22\" returns successfully" Mar 7 00:54:34.552544 kubelet[2572]: I0307 00:54:34.552180 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-54959b7f66-h654f" podStartSLOduration=22.445583643 podStartE2EDuration="30.552160802s" podCreationTimestamp="2026-03-07 00:54:04 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.201847759 +0000 UTC m=+42.208387360" lastFinishedPulling="2026-03-07 00:54:34.308424878 +0000 UTC m=+50.314964519" observedRunningTime="2026-03-07 00:54:34.551920353 +0000 UTC m=+50.558459954" watchObservedRunningTime="2026-03-07 00:54:34.552160802 +0000 UTC m=+50.558700443" Mar 7 00:54:34.552544 kubelet[2572]: I0307 00:54:34.552388 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-54959b7f66-fgbzm" podStartSLOduration=22.832189497999998 podStartE2EDuration="30.55238333s" podCreationTimestamp="2026-03-07 00:54:04 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.1652001 +0000 UTC m=+42.171739741" lastFinishedPulling="2026-03-07 00:54:33.885393972 +0000 UTC m=+49.891933573" observedRunningTime="2026-03-07 00:54:34.534324995 +0000 UTC m=+50.540864636" watchObservedRunningTime="2026-03-07 00:54:34.55238333 +0000 UTC m=+50.558922971" Mar 7 00:54:35.141696 containerd[1473]: time="2026-03-07T00:54:35.141295562Z" level=info msg="StopPodSandbox for \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\"" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.210 [INFO][4770] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.210 [INFO][4770] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" iface="eth0" netns="/var/run/netns/cni-f82a8aa9-02d7-c43c-3f65-8b091e3b1c1c" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.212 [INFO][4770] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" iface="eth0" netns="/var/run/netns/cni-f82a8aa9-02d7-c43c-3f65-8b091e3b1c1c" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.214 [INFO][4770] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" iface="eth0" netns="/var/run/netns/cni-f82a8aa9-02d7-c43c-3f65-8b091e3b1c1c" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.214 [INFO][4770] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.215 [INFO][4770] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.272 [INFO][4777] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.272 [INFO][4777] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.272 [INFO][4777] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.285 [WARNING][4777] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.285 [INFO][4777] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.288 [INFO][4777] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:35.295072 containerd[1473]: 2026-03-07 00:54:35.292 [INFO][4770] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:35.297719 containerd[1473]: time="2026-03-07T00:54:35.297148426Z" level=info msg="TearDown network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\" successfully" Mar 7 00:54:35.297719 containerd[1473]: time="2026-03-07T00:54:35.297200908Z" level=info msg="StopPodSandbox for \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\" returns successfully" Mar 7 00:54:35.298599 containerd[1473]: time="2026-03-07T00:54:35.298150821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjqpf,Uid:e871832b-09ba-4e7c-a062-f6bcd18f87c8,Namespace:kube-system,Attempt:1,}" Mar 7 00:54:35.305414 systemd[1]: run-netns-cni\x2df82a8aa9\x2d02d7\x2dc43c\x2d3f65\x2d8b091e3b1c1c.mount: Deactivated successfully. 
Mar 7 00:54:35.518540 kubelet[2572]: I0307 00:54:35.518389 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:35.518540 kubelet[2572]: I0307 00:54:35.518424 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:35.559964 systemd-networkd[1365]: calibe10654b9b5: Link UP Mar 7 00:54:35.562447 systemd-networkd[1365]: calibe10654b9b5: Gained carrier Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.408 [INFO][4783] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0 coredns-674b8bbfcf- kube-system e871832b-09ba-4e7c-a062-f6bcd18f87c8 996 0 2026-03-07 00:53:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-2a659a64a8 coredns-674b8bbfcf-mjqpf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe10654b9b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.408 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.460 [INFO][4795] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" HandleID="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.476 [INFO][4795] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" HandleID="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002731a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-2a659a64a8", "pod":"coredns-674b8bbfcf-mjqpf", "timestamp":"2026-03-07 00:54:35.460182063 +0000 UTC"}, Hostname:"ci-4081-3-6-n-2a659a64a8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030cf20)} Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.476 [INFO][4795] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.476 [INFO][4795] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.476 [INFO][4795] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-2a659a64a8' Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.483 [INFO][4795] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.489 [INFO][4795] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.497 [INFO][4795] ipam/ipam.go 526: Trying affinity for 192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.501 [INFO][4795] ipam/ipam.go 160: Attempting to load block cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.509 [INFO][4795] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.96.192/26 host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.509 [INFO][4795] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.96.192/26 handle="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.512 [INFO][4795] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.529 [INFO][4795] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.96.192/26 handle="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.547 [INFO][4795] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.96.200/26] block=192.168.96.192/26 handle="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.547 [INFO][4795] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.96.200/26] handle="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" host="ci-4081-3-6-n-2a659a64a8" Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.548 [INFO][4795] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:54:35.600174 containerd[1473]: 2026-03-07 00:54:35.548 [INFO][4795] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.96.200/26] IPv6=[] ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" HandleID="k8s-pod-network.6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.601216 containerd[1473]: 2026-03-07 00:54:35.552 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e871832b-09ba-4e7c-a062-f6bcd18f87c8", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"", Pod:"coredns-674b8bbfcf-mjqpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe10654b9b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:35.601216 containerd[1473]: 2026-03-07 00:54:35.552 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.96.200/32] ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.601216 containerd[1473]: 2026-03-07 00:54:35.552 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe10654b9b5 ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.601216 containerd[1473]: 2026-03-07 00:54:35.559 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.601216 containerd[1473]: 2026-03-07 00:54:35.565 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e871832b-09ba-4e7c-a062-f6bcd18f87c8", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f", Pod:"coredns-674b8bbfcf-mjqpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe10654b9b5", MAC:"c6:40:e5:7c:b2:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:35.601216 containerd[1473]: 2026-03-07 00:54:35.589 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-mjqpf" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:35.713280 containerd[1473]: time="2026-03-07T00:54:35.712347967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:54:35.713280 containerd[1473]: time="2026-03-07T00:54:35.712455171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:54:35.713280 containerd[1473]: time="2026-03-07T00:54:35.712471851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:35.719142 containerd[1473]: time="2026-03-07T00:54:35.718829636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:54:35.758840 systemd[1]: Started cri-containerd-6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f.scope - libcontainer container 6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f. Mar 7 00:54:35.822545 containerd[1473]: time="2026-03-07T00:54:35.822300450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mjqpf,Uid:e871832b-09ba-4e7c-a062-f6bcd18f87c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f\"" Mar 7 00:54:35.839120 containerd[1473]: time="2026-03-07T00:54:35.838745790Z" level=info msg="CreateContainer within sandbox \"6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:54:35.926245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271922202.mount: Deactivated successfully. Mar 7 00:54:35.930401 containerd[1473]: time="2026-03-07T00:54:35.929412872Z" level=info msg="CreateContainer within sandbox \"6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57b1b36b586c0bedb2f0b9248ee9ab8bf34410019e7eafbc6f3868e41e8c5719\"" Mar 7 00:54:35.932076 containerd[1473]: time="2026-03-07T00:54:35.931814957Z" level=info msg="StartContainer for \"57b1b36b586c0bedb2f0b9248ee9ab8bf34410019e7eafbc6f3868e41e8c5719\"" Mar 7 00:54:35.999378 systemd[1]: run-containerd-runc-k8s.io-57b1b36b586c0bedb2f0b9248ee9ab8bf34410019e7eafbc6f3868e41e8c5719-runc.NH7kyh.mount: Deactivated successfully. Mar 7 00:54:36.009269 systemd[1]: Started cri-containerd-57b1b36b586c0bedb2f0b9248ee9ab8bf34410019e7eafbc6f3868e41e8c5719.scope - libcontainer container 57b1b36b586c0bedb2f0b9248ee9ab8bf34410019e7eafbc6f3868e41e8c5719. 
Mar 7 00:54:36.080515 containerd[1473]: time="2026-03-07T00:54:36.079028121Z" level=info msg="StartContainer for \"57b1b36b586c0bedb2f0b9248ee9ab8bf34410019e7eafbc6f3868e41e8c5719\" returns successfully" Mar 7 00:54:36.575641 kubelet[2572]: I0307 00:54:36.574727 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mjqpf" podStartSLOduration=46.574708757 podStartE2EDuration="46.574708757s" podCreationTimestamp="2026-03-07 00:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:54:36.542613494 +0000 UTC m=+52.549153135" watchObservedRunningTime="2026-03-07 00:54:36.574708757 +0000 UTC m=+52.581248358" Mar 7 00:54:36.938101 containerd[1473]: time="2026-03-07T00:54:36.937619110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:36.939753 containerd[1473]: time="2026-03-07T00:54:36.939622379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 00:54:36.941000 containerd[1473]: time="2026-03-07T00:54:36.940922823Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:36.949085 containerd[1473]: time="2026-03-07T00:54:36.947019713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:36.949085 containerd[1473]: time="2026-03-07T00:54:36.948431521Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.638049572s" Mar 7 00:54:36.949085 containerd[1473]: time="2026-03-07T00:54:36.948863056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 00:54:36.951631 containerd[1473]: time="2026-03-07T00:54:36.951601630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 00:54:36.978180 containerd[1473]: time="2026-03-07T00:54:36.978131462Z" level=info msg="CreateContainer within sandbox \"98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 00:54:37.000021 containerd[1473]: time="2026-03-07T00:54:36.999972813Z" level=info msg="CreateContainer within sandbox \"98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9c094cc7f5b673fc8aa8f5ca43b801226857a8c483d277c2dd5fb5ff6e353d0\"" Mar 7 00:54:37.002489 containerd[1473]: time="2026-03-07T00:54:37.001407142Z" level=info msg="StartContainer for \"c9c094cc7f5b673fc8aa8f5ca43b801226857a8c483d277c2dd5fb5ff6e353d0\"" Mar 7 00:54:37.048282 systemd[1]: Started cri-containerd-c9c094cc7f5b673fc8aa8f5ca43b801226857a8c483d277c2dd5fb5ff6e353d0.scope - 
libcontainer container c9c094cc7f5b673fc8aa8f5ca43b801226857a8c483d277c2dd5fb5ff6e353d0. Mar 7 00:54:37.138148 containerd[1473]: time="2026-03-07T00:54:37.138002992Z" level=info msg="StartContainer for \"c9c094cc7f5b673fc8aa8f5ca43b801226857a8c483d277c2dd5fb5ff6e353d0\" returns successfully" Mar 7 00:54:37.477394 systemd-networkd[1365]: calibe10654b9b5: Gained IPv6LL Mar 7 00:54:37.560567 kubelet[2572]: I0307 00:54:37.558240 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-559d85bfb5-ppgz2" podStartSLOduration=20.831722899 podStartE2EDuration="31.55821965s" podCreationTimestamp="2026-03-07 00:54:06 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.223882677 +0000 UTC m=+42.230422318" lastFinishedPulling="2026-03-07 00:54:36.950379428 +0000 UTC m=+52.956919069" observedRunningTime="2026-03-07 00:54:37.557859358 +0000 UTC m=+53.564398999" watchObservedRunningTime="2026-03-07 00:54:37.55821965 +0000 UTC m=+53.564759291" Mar 7 00:54:38.353035 containerd[1473]: time="2026-03-07T00:54:38.352955887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.355386 containerd[1473]: time="2026-03-07T00:54:38.355223161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 00:54:38.356686 containerd[1473]: time="2026-03-07T00:54:38.356618167Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.362620 containerd[1473]: time="2026-03-07T00:54:38.362544880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.365079 containerd[1473]: time="2026-03-07T00:54:38.363661716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.41188272s" Mar 7 00:54:38.365079 containerd[1473]: time="2026-03-07T00:54:38.363710118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 00:54:38.368905 containerd[1473]: time="2026-03-07T00:54:38.368807284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 00:54:38.380296 containerd[1473]: time="2026-03-07T00:54:38.380247616Z" level=info msg="CreateContainer within sandbox \"16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 00:54:38.404039 containerd[1473]: time="2026-03-07T00:54:38.403937588Z" level=info msg="CreateContainer within sandbox \"16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1c6eed77347c319ae4f90fefdefcaa456e2dff67c575a294801d5488db2d0b96\"" Mar 7 00:54:38.406121 containerd[1473]: time="2026-03-07T00:54:38.404535607Z" level=info msg="StartContainer for 
\"1c6eed77347c319ae4f90fefdefcaa456e2dff67c575a294801d5488db2d0b96\"" Mar 7 00:54:38.451516 systemd[1]: Started cri-containerd-1c6eed77347c319ae4f90fefdefcaa456e2dff67c575a294801d5488db2d0b96.scope - libcontainer container 1c6eed77347c319ae4f90fefdefcaa456e2dff67c575a294801d5488db2d0b96. Mar 7 00:54:38.494774 containerd[1473]: time="2026-03-07T00:54:38.494407415Z" level=info msg="StartContainer for \"1c6eed77347c319ae4f90fefdefcaa456e2dff67c575a294801d5488db2d0b96\" returns successfully" Mar 7 00:54:39.846209 containerd[1473]: time="2026-03-07T00:54:39.845245809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:39.848144 containerd[1473]: time="2026-03-07T00:54:39.848095140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 7 00:54:39.849322 containerd[1473]: time="2026-03-07T00:54:39.849257577Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:39.854846 containerd[1473]: time="2026-03-07T00:54:39.854795712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:39.856138 containerd[1473]: time="2026-03-07T00:54:39.856071233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.487140265s" Mar 7 00:54:39.856138 containerd[1473]: time="2026-03-07T00:54:39.856137835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 7 00:54:39.858409 containerd[1473]: time="2026-03-07T00:54:39.858348905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 00:54:39.863524 containerd[1473]: time="2026-03-07T00:54:39.863289982Z" level=info msg="CreateContainer within sandbox \"b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 00:54:39.883798 containerd[1473]: time="2026-03-07T00:54:39.883729390Z" level=info msg="CreateContainer within sandbox \"b930c9d46f219738865f8915134621fb44ccaeffea6d1228fb6f23378e6bb120\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a12d13927af8bc5d4a8dd1bf7521f37443561ac47fdb420ecd02769fe97d418d\"" Mar 7 00:54:39.885150 containerd[1473]: time="2026-03-07T00:54:39.885107874Z" level=info msg="StartContainer for \"a12d13927af8bc5d4a8dd1bf7521f37443561ac47fdb420ecd02769fe97d418d\"" Mar 7 00:54:39.935479 systemd[1]: Started cri-containerd-a12d13927af8bc5d4a8dd1bf7521f37443561ac47fdb420ecd02769fe97d418d.scope - libcontainer container a12d13927af8bc5d4a8dd1bf7521f37443561ac47fdb420ecd02769fe97d418d. 
Mar 7 00:54:39.972830 containerd[1473]: time="2026-03-07T00:54:39.972627249Z" level=info msg="StartContainer for \"a12d13927af8bc5d4a8dd1bf7521f37443561ac47fdb420ecd02769fe97d418d\" returns successfully" Mar 7 00:54:40.281941 kubelet[2572]: I0307 00:54:40.281741 2572 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 00:54:40.281941 kubelet[2572]: I0307 00:54:40.281786 2572 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 00:54:40.566260 kubelet[2572]: I0307 00:54:40.565837 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8p8nh" podStartSLOduration=19.344751732 podStartE2EDuration="34.565815072s" podCreationTimestamp="2026-03-07 00:54:06 +0000 UTC" firstStartedPulling="2026-03-07 00:54:24.637007636 +0000 UTC m=+40.643547277" lastFinishedPulling="2026-03-07 00:54:39.858070976 +0000 UTC m=+55.864610617" observedRunningTime="2026-03-07 00:54:40.564912404 +0000 UTC m=+56.571452085" watchObservedRunningTime="2026-03-07 00:54:40.565815072 +0000 UTC m=+56.572354713" Mar 7 00:54:44.170745 containerd[1473]: time="2026-03-07T00:54:44.170706878Z" level=info msg="StopPodSandbox for \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\"" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.247 [WARNING][5095] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea70a10d-7ce3-4275-9b2c-469af8e4ffc8", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911", Pod:"coredns-674b8bbfcf-q7gff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373f140d0be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.247 [INFO][5095] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.247 [INFO][5095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" iface="eth0" netns="" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.247 [INFO][5095] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.247 [INFO][5095] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.281 [INFO][5102] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.281 [INFO][5102] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.281 [INFO][5102] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.295 [WARNING][5102] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.295 [INFO][5102] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.298 [INFO][5102] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:44.305766 containerd[1473]: 2026-03-07 00:54:44.301 [INFO][5095] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.305766 containerd[1473]: time="2026-03-07T00:54:44.305346183Z" level=info msg="TearDown network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\" successfully" Mar 7 00:54:44.305766 containerd[1473]: time="2026-03-07T00:54:44.305375304Z" level=info msg="StopPodSandbox for \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\" returns successfully" Mar 7 00:54:44.323311 containerd[1473]: time="2026-03-07T00:54:44.323223960Z" level=info msg="RemovePodSandbox for \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\"" Mar 7 00:54:44.332412 containerd[1473]: time="2026-03-07T00:54:44.332313533Z" level=info msg="Forcibly stopping sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\"" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.389 [WARNING][5117] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ea70a10d-7ce3-4275-9b2c-469af8e4ffc8", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"49c119a2447b080c5eb72c1794a6d5dccfd096b3e83b9a78d0d29eb61c0b0911", Pod:"coredns-674b8bbfcf-q7gff", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali373f140d0be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.389 [INFO][5117] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.389 [INFO][5117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" iface="eth0" netns="" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.389 [INFO][5117] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.389 [INFO][5117] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.431 [INFO][5125] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.431 [INFO][5125] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.431 [INFO][5125] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.452 [WARNING][5125] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.452 [INFO][5125] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" HandleID="k8s-pod-network.2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--q7gff-eth0" Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.455 [INFO][5125] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:44.460176 containerd[1473]: 2026-03-07 00:54:44.457 [INFO][5117] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90" Mar 7 00:54:44.460176 containerd[1473]: time="2026-03-07T00:54:44.460011085Z" level=info msg="TearDown network for sandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\" successfully" Mar 7 00:54:44.472316 containerd[1473]: time="2026-03-07T00:54:44.472196344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:44.472968 containerd[1473]: time="2026-03-07T00:54:44.472486832Z" level=info msg="RemovePodSandbox \"2ef05ac07119f985e48d975fb798b13ae7c67dd8e3a030de2615c6ab8c92ae90\" returns successfully" Mar 7 00:54:44.473598 containerd[1473]: time="2026-03-07T00:54:44.473236813Z" level=info msg="StopPodSandbox for \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\"" Mar 7 00:54:44.533545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795029941.mount: Deactivated successfully. 
Mar 7 00:54:44.559498 containerd[1473]: time="2026-03-07T00:54:44.559428010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:44.562519 containerd[1473]: time="2026-03-07T00:54:44.562035882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 00:54:44.562885 containerd[1473]: time="2026-03-07T00:54:44.562850225Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:44.568085 containerd[1473]: time="2026-03-07T00:54:44.567949167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:44.569569 containerd[1473]: time="2026-03-07T00:54:44.569440968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 4.711034262s" Mar 7 00:54:44.569569 containerd[1473]: time="2026-03-07T00:54:44.569489010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 00:54:44.576668 containerd[1473]: time="2026-03-07T00:54:44.576322360Z" level=info msg="CreateContainer within sandbox \"16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 00:54:44.597674 containerd[1473]: time="2026-03-07T00:54:44.597625032Z" level=info msg="CreateContainer within sandbox \"16bedd5a3d79f53a394598cfc00cefc06b0251641b0bb3379ad193ae399f3ddc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a6c1d54657fc256bdbdb076a5b2a3e25b80c4676025499952c478d5ba3244603\"" Mar 7 00:54:44.599227 containerd[1473]: time="2026-03-07T00:54:44.599188236Z" level=info msg="StartContainer for \"a6c1d54657fc256bdbdb076a5b2a3e25b80c4676025499952c478d5ba3244603\"" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.544 [WARNING][5140] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e871832b-09ba-4e7c-a062-f6bcd18f87c8", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f", Pod:"coredns-674b8bbfcf-mjqpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe10654b9b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.544 [INFO][5140] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.549 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" iface="eth0" netns="" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.549 [INFO][5140] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.549 [INFO][5140] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.587 [INFO][5153] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.589 [INFO][5153] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.589 [INFO][5153] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.607 [WARNING][5153] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.607 [INFO][5153] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.610 [INFO][5153] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:44.617488 containerd[1473]: 2026-03-07 00:54:44.613 [INFO][5140] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.619564 containerd[1473]: time="2026-03-07T00:54:44.618822582Z" level=info msg="TearDown network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\" successfully" Mar 7 00:54:44.625163 containerd[1473]: time="2026-03-07T00:54:44.624648464Z" level=info msg="StopPodSandbox for \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\" returns successfully" Mar 7 00:54:44.628090 containerd[1473]: time="2026-03-07T00:54:44.627354779Z" level=info msg="RemovePodSandbox for \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\"" Mar 7 00:54:44.628335 containerd[1473]: time="2026-03-07T00:54:44.628289725Z" level=info msg="Forcibly stopping sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\"" Mar 7 00:54:44.658332 systemd[1]: Started cri-containerd-a6c1d54657fc256bdbdb076a5b2a3e25b80c4676025499952c478d5ba3244603.scope - libcontainer container a6c1d54657fc256bdbdb076a5b2a3e25b80c4676025499952c478d5ba3244603. Mar 7 00:54:44.721067 containerd[1473]: time="2026-03-07T00:54:44.720841099Z" level=info msg="StartContainer for \"a6c1d54657fc256bdbdb076a5b2a3e25b80c4676025499952c478d5ba3244603\" returns successfully" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.702 [WARNING][5184] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e871832b-09ba-4e7c-a062-f6bcd18f87c8", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 53, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"6170dc63bfa0ab94bba33235e0faa926e6a04fb00bc0d8251552bcfafb48cd3f", Pod:"coredns-674b8bbfcf-mjqpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.96.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe10654b9b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.705 [INFO][5184] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.705 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" iface="eth0" netns="" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.705 [INFO][5184] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.705 [INFO][5184] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.746 [INFO][5200] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.746 [INFO][5200] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.746 [INFO][5200] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.757 [WARNING][5200] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.757 [INFO][5200] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" HandleID="k8s-pod-network.51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Workload="ci--4081--3--6--n--2a659a64a8-k8s-coredns--674b8bbfcf--mjqpf-eth0" Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.760 [INFO][5200] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:44.765170 containerd[1473]: 2026-03-07 00:54:44.762 [INFO][5184] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0" Mar 7 00:54:44.765170 containerd[1473]: time="2026-03-07T00:54:44.764963287Z" level=info msg="TearDown network for sandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\" successfully" Mar 7 00:54:44.770198 containerd[1473]: time="2026-03-07T00:54:44.769936785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:44.770198 containerd[1473]: time="2026-03-07T00:54:44.770030828Z" level=info msg="RemovePodSandbox \"51c680db654a491b60568f31d70584adb3a6d833a9552799f3e461bd2a87c1d0\" returns successfully" Mar 7 00:54:44.771115 containerd[1473]: time="2026-03-07T00:54:44.770752688Z" level=info msg="StopPodSandbox for \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\"" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.819 [WARNING][5226] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0", GenerateName:"calico-kube-controllers-559d85bfb5-", Namespace:"calico-system", SelfLink:"", UID:"09e69c3c-3f2b-493c-a453-9280256b7c38", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559d85bfb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae", Pod:"calico-kube-controllers-559d85bfb5-ppgz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid745776d435", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.820 [INFO][5226] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.820 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" iface="eth0" netns="" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.820 [INFO][5226] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.820 [INFO][5226] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.849 [INFO][5233] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.849 [INFO][5233] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.849 [INFO][5233] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.862 [WARNING][5233] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.863 [INFO][5233] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.865 [INFO][5233] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:44.870777 containerd[1473]: 2026-03-07 00:54:44.868 [INFO][5226] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.871793 containerd[1473]: time="2026-03-07T00:54:44.871621413Z" level=info msg="TearDown network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\" successfully" Mar 7 00:54:44.871793 containerd[1473]: time="2026-03-07T00:54:44.871664734Z" level=info msg="StopPodSandbox for \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\" returns successfully" Mar 7 00:54:44.872590 containerd[1473]: time="2026-03-07T00:54:44.872287552Z" level=info msg="RemovePodSandbox for \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\"" Mar 7 00:54:44.872590 containerd[1473]: time="2026-03-07T00:54:44.872322153Z" level=info msg="Forcibly stopping sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\"" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.919 [WARNING][5247] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0", GenerateName:"calico-kube-controllers-559d85bfb5-", Namespace:"calico-system", SelfLink:"", UID:"09e69c3c-3f2b-493c-a453-9280256b7c38", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"559d85bfb5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"98be4197cefbd9e23556cf431811a1789b1f2316a885a9f666fbae00c184a9ae", Pod:"calico-kube-controllers-559d85bfb5-ppgz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.96.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid745776d435", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.919 [INFO][5247] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.919 [INFO][5247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" iface="eth0" netns="" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.920 [INFO][5247] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.920 [INFO][5247] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.944 [INFO][5255] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.944 [INFO][5255] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.944 [INFO][5255] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.957 [WARNING][5255] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.957 [INFO][5255] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" HandleID="k8s-pod-network.440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Workload="ci--4081--3--6--n--2a659a64a8-k8s-calico--kube--controllers--559d85bfb5--ppgz2-eth0" Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.960 [INFO][5255] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:44.964815 containerd[1473]: 2026-03-07 00:54:44.962 [INFO][5247] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1" Mar 7 00:54:44.965454 containerd[1473]: time="2026-03-07T00:54:44.964880847Z" level=info msg="TearDown network for sandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\" successfully" Mar 7 00:54:44.969423 containerd[1473]: time="2026-03-07T00:54:44.969362252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:44.969849 containerd[1473]: time="2026-03-07T00:54:44.969495895Z" level=info msg="RemovePodSandbox \"440032d2829848663f82a13f2b9363ba8fd9fe7c6a3f7216f310ee040d9f26a1\" returns successfully" Mar 7 00:54:44.970486 containerd[1473]: time="2026-03-07T00:54:44.970099592Z" level=info msg="StopPodSandbox for \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\"" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.017 [WARNING][5269] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.017 [INFO][5269] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.017 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" iface="eth0" netns="" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.017 [INFO][5269] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.017 [INFO][5269] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.046 [INFO][5276] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.046 [INFO][5276] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.046 [INFO][5276] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.059 [WARNING][5276] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.059 [INFO][5276] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.061 [INFO][5276] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:45.066495 containerd[1473]: 2026-03-07 00:54:45.063 [INFO][5269] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.066495 containerd[1473]: time="2026-03-07T00:54:45.066376824Z" level=info msg="TearDown network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\" successfully" Mar 7 00:54:45.066495 containerd[1473]: time="2026-03-07T00:54:45.066415305Z" level=info msg="StopPodSandbox for \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\" returns successfully" Mar 7 00:54:45.069572 containerd[1473]: time="2026-03-07T00:54:45.069011295Z" level=info msg="RemovePodSandbox for \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\"" Mar 7 00:54:45.069572 containerd[1473]: time="2026-03-07T00:54:45.069580871Z" level=info msg="Forcibly stopping sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\"" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.116 [WARNING][5290] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" WorkloadEndpoint="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.116 [INFO][5290] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.116 [INFO][5290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" iface="eth0" netns="" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.116 [INFO][5290] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.116 [INFO][5290] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.144 [INFO][5298] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.145 [INFO][5298] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.145 [INFO][5298] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.160 [WARNING][5298] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.160 [INFO][5298] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" HandleID="k8s-pod-network.064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Workload="ci--4081--3--6--n--2a659a64a8-k8s-whisker--77997bc986--2dg5x-eth0" Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.163 [INFO][5298] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:45.167505 containerd[1473]: 2026-03-07 00:54:45.165 [INFO][5290] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d" Mar 7 00:54:45.168022 containerd[1473]: time="2026-03-07T00:54:45.167576967Z" level=info msg="TearDown network for sandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\" successfully" Mar 7 00:54:45.172752 containerd[1473]: time="2026-03-07T00:54:45.172675545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:45.173467 containerd[1473]: time="2026-03-07T00:54:45.172818789Z" level=info msg="RemovePodSandbox \"064d2a04f85e62d01ae3692f7c1b63627e5f810dce2faa45be156b82d9ed692d\" returns successfully" Mar 7 00:54:45.174137 containerd[1473]: time="2026-03-07T00:54:45.174068103Z" level=info msg="StopPodSandbox for \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\"" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.224 [WARNING][5312] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"12a36780-8f9b-49b6-ae7a-05cc5d985d69", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f", Pod:"goldmane-5b85766d88-6gzpk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec156762035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.225 [INFO][5312] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.225 [INFO][5312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" iface="eth0" netns="" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.225 [INFO][5312] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.225 [INFO][5312] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.251 [INFO][5320] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.251 [INFO][5320] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.251 [INFO][5320] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.266 [WARNING][5320] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.267 [INFO][5320] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.269 [INFO][5320] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:45.275604 containerd[1473]: 2026-03-07 00:54:45.273 [INFO][5312] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.276977 containerd[1473]: time="2026-03-07T00:54:45.275693977Z" level=info msg="TearDown network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\" successfully" Mar 7 00:54:45.276977 containerd[1473]: time="2026-03-07T00:54:45.275732779Z" level=info msg="StopPodSandbox for \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\" returns successfully" Mar 7 00:54:45.276977 containerd[1473]: time="2026-03-07T00:54:45.276369796Z" level=info msg="RemovePodSandbox for \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\"" Mar 7 00:54:45.276977 containerd[1473]: time="2026-03-07T00:54:45.276405437Z" level=info msg="Forcibly stopping sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\"" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.330 [WARNING][5334] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"12a36780-8f9b-49b6-ae7a-05cc5d985d69", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 54, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-2a659a64a8", ContainerID:"5557a37c0caf79717fbcbcad25fbcca7e8ec5dd41715305289f9877f5639cd0f", Pod:"goldmane-5b85766d88-6gzpk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.96.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliec156762035", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.331 [INFO][5334] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.331 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" iface="eth0" netns="" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.331 [INFO][5334] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.331 [INFO][5334] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.358 [INFO][5342] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.358 [INFO][5342] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.358 [INFO][5342] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.369 [WARNING][5342] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.369 [INFO][5342] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" HandleID="k8s-pod-network.9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Workload="ci--4081--3--6--n--2a659a64a8-k8s-goldmane--5b85766d88--6gzpk-eth0" Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.372 [INFO][5342] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:54:45.378466 containerd[1473]: 2026-03-07 00:54:45.374 [INFO][5334] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5" Mar 7 00:54:45.378466 containerd[1473]: time="2026-03-07T00:54:45.377947909Z" level=info msg="TearDown network for sandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\" successfully" Mar 7 00:54:45.384295 containerd[1473]: time="2026-03-07T00:54:45.383435178Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:54:45.384295 containerd[1473]: time="2026-03-07T00:54:45.383543901Z" level=info msg="RemovePodSandbox \"9c0b240c1ec03523ed72ca8a1f06597fdb3129ef97e03889952ff63500db75e5\" returns successfully" Mar 7 00:54:45.610193 kubelet[2572]: I0307 00:54:45.610106 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7688855859-jw2d4" podStartSLOduration=2.789376423 podStartE2EDuration="20.610085121s" podCreationTimestamp="2026-03-07 00:54:25 +0000 UTC" firstStartedPulling="2026-03-07 00:54:26.750081948 +0000 UTC m=+42.756621589" lastFinishedPulling="2026-03-07 00:54:44.570790646 +0000 UTC m=+60.577330287" observedRunningTime="2026-03-07 00:54:45.609067854 +0000 UTC m=+61.615607535" watchObservedRunningTime="2026-03-07 00:54:45.610085121 +0000 UTC m=+61.616624842" Mar 7 00:54:59.224605 kubelet[2572]: I0307 00:54:59.223921 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:54:59.712535 kubelet[2572]: I0307 00:54:59.711881 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:17.335368 systemd[1]: Started sshd@7-116.202.31.117:22-20.161.92.111:57678.service - OpenSSH per-connection server daemon (20.161.92.111:57678). Mar 7 00:56:17.929109 sshd[5714]: Accepted publickey for core from 20.161.92.111 port 57678 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:17.931029 sshd[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:17.938767 systemd-logind[1455]: New session 8 of user core. Mar 7 00:56:17.948795 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 7 00:56:18.445745 sshd[5714]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:18.451379 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Mar 7 00:56:18.452803 systemd[1]: sshd@7-116.202.31.117:22-20.161.92.111:57678.service: Deactivated successfully. 
Mar 7 00:56:18.456923 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 00:56:18.459287 systemd-logind[1455]: Removed session 8. Mar 7 00:56:23.562730 systemd[1]: Started sshd@8-116.202.31.117:22-20.161.92.111:37772.service - OpenSSH per-connection server daemon (20.161.92.111:37772). Mar 7 00:56:24.151162 sshd[5730]: Accepted publickey for core from 20.161.92.111 port 37772 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:24.154219 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:24.160644 systemd-logind[1455]: New session 9 of user core. Mar 7 00:56:24.165251 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 7 00:56:24.651282 sshd[5730]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:24.656817 systemd[1]: sshd@8-116.202.31.117:22-20.161.92.111:37772.service: Deactivated successfully. Mar 7 00:56:24.659941 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 00:56:24.660805 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Mar 7 00:56:24.661827 systemd-logind[1455]: Removed session 9. Mar 7 00:56:29.761541 systemd[1]: Started sshd@9-116.202.31.117:22-20.161.92.111:37786.service - OpenSSH per-connection server daemon (20.161.92.111:37786). Mar 7 00:56:30.352444 sshd[5785]: Accepted publickey for core from 20.161.92.111 port 37786 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:30.356340 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:30.361414 systemd-logind[1455]: New session 10 of user core. Mar 7 00:56:30.369442 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 7 00:56:30.852413 sshd[5785]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:30.858891 systemd[1]: sshd@9-116.202.31.117:22-20.161.92.111:37786.service: Deactivated successfully. Mar 7 00:56:30.861737 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 00:56:30.862671 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Mar 7 00:56:30.864132 systemd-logind[1455]: Removed session 10. Mar 7 00:56:35.963520 systemd[1]: Started sshd@10-116.202.31.117:22-20.161.92.111:54334.service - OpenSSH per-connection server daemon (20.161.92.111:54334). Mar 7 00:56:36.553677 sshd[5838]: Accepted publickey for core from 20.161.92.111 port 54334 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:36.557782 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:36.578361 systemd-logind[1455]: New session 11 of user core. Mar 7 00:56:36.584289 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 7 00:56:37.066365 sshd[5838]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:37.070885 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Mar 7 00:56:37.071023 systemd[1]: sshd@10-116.202.31.117:22-20.161.92.111:54334.service: Deactivated successfully. Mar 7 00:56:37.074701 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 00:56:37.079595 systemd-logind[1455]: Removed session 11. Mar 7 00:56:37.181427 systemd[1]: Started sshd@11-116.202.31.117:22-20.161.92.111:54346.service - OpenSSH per-connection server daemon (20.161.92.111:54346). 
Mar 7 00:56:37.768788 sshd[5852]: Accepted publickey for core from 20.161.92.111 port 54346 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:37.770220 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:37.776037 systemd-logind[1455]: New session 12 of user core. Mar 7 00:56:37.781296 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 7 00:56:38.310972 sshd[5852]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:38.317092 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Mar 7 00:56:38.317609 systemd[1]: sshd@11-116.202.31.117:22-20.161.92.111:54346.service: Deactivated successfully. Mar 7 00:56:38.320365 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 00:56:38.323308 systemd-logind[1455]: Removed session 12. Mar 7 00:56:38.426645 systemd[1]: Started sshd@12-116.202.31.117:22-20.161.92.111:54350.service - OpenSSH per-connection server daemon (20.161.92.111:54350). Mar 7 00:56:39.011081 sshd[5881]: Accepted publickey for core from 20.161.92.111 port 54350 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:39.012932 sshd[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:39.018608 systemd-logind[1455]: New session 13 of user core. Mar 7 00:56:39.030431 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 7 00:56:39.510015 sshd[5881]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:39.516102 systemd[1]: sshd@12-116.202.31.117:22-20.161.92.111:54350.service: Deactivated successfully. Mar 7 00:56:39.520614 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 00:56:39.522168 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Mar 7 00:56:39.523648 systemd-logind[1455]: Removed session 13. Mar 7 00:56:44.623431 systemd[1]: Started sshd@13-116.202.31.117:22-20.161.92.111:58374.service - OpenSSH per-connection server daemon (20.161.92.111:58374). Mar 7 00:56:45.217133 sshd[5895]: Accepted publickey for core from 20.161.92.111 port 58374 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:45.218952 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:45.224724 systemd-logind[1455]: New session 14 of user core. Mar 7 00:56:45.235410 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 00:56:45.718988 sshd[5895]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:45.726001 systemd[1]: sshd@13-116.202.31.117:22-20.161.92.111:58374.service: Deactivated successfully. Mar 7 00:56:45.728476 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 00:56:45.729514 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Mar 7 00:56:45.731217 systemd-logind[1455]: Removed session 14. Mar 7 00:56:45.832184 systemd[1]: Started sshd@14-116.202.31.117:22-20.161.92.111:58376.service - OpenSSH per-connection server daemon (20.161.92.111:58376). Mar 7 00:56:46.427176 sshd[5908]: Accepted publickey for core from 20.161.92.111 port 58376 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:46.428774 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:46.434677 systemd-logind[1455]: New session 15 of user core. Mar 7 00:56:46.440267 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 7 00:56:47.115360 sshd[5908]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:47.121434 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Mar 7 00:56:47.122356 systemd[1]: sshd@14-116.202.31.117:22-20.161.92.111:58376.service: Deactivated successfully. Mar 7 00:56:47.125748 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 00:56:47.127158 systemd-logind[1455]: Removed session 15. Mar 7 00:56:47.235561 systemd[1]: Started sshd@15-116.202.31.117:22-20.161.92.111:58382.service - OpenSSH per-connection server daemon (20.161.92.111:58382). Mar 7 00:56:47.824301 sshd[5919]: Accepted publickey for core from 20.161.92.111 port 58382 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:47.826826 sshd[5919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:47.838339 systemd-logind[1455]: New session 16 of user core. Mar 7 00:56:47.844286 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 00:56:48.962976 sshd[5919]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:48.968365 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Mar 7 00:56:48.969504 systemd[1]: sshd@15-116.202.31.117:22-20.161.92.111:58382.service: Deactivated successfully. Mar 7 00:56:48.971559 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 00:56:48.973293 systemd-logind[1455]: Removed session 16. Mar 7 00:56:49.067442 systemd[1]: Started sshd@16-116.202.31.117:22-20.161.92.111:58396.service - OpenSSH per-connection server daemon (20.161.92.111:58396). Mar 7 00:56:49.663585 sshd[5946]: Accepted publickey for core from 20.161.92.111 port 58396 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:49.666602 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:49.671984 systemd-logind[1455]: New session 17 of user core. Mar 7 00:56:49.680514 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 7 00:56:50.322830 sshd[5946]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:50.329015 systemd[1]: sshd@16-116.202.31.117:22-20.161.92.111:58396.service: Deactivated successfully. Mar 7 00:56:50.329103 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Mar 7 00:56:50.333381 systemd[1]: session-17.scope: Deactivated successfully. Mar 7 00:56:50.334736 systemd-logind[1455]: Removed session 17. Mar 7 00:56:50.434438 systemd[1]: Started sshd@17-116.202.31.117:22-20.161.92.111:60716.service - OpenSSH per-connection server daemon (20.161.92.111:60716). Mar 7 00:56:51.023001 sshd[5957]: Accepted publickey for core from 20.161.92.111 port 60716 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:51.026770 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:51.032076 systemd-logind[1455]: New session 18 of user core. Mar 7 00:56:51.041889 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 7 00:56:51.525210 sshd[5957]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:51.530636 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Mar 7 00:56:51.531006 systemd[1]: sshd@17-116.202.31.117:22-20.161.92.111:60716.service: Deactivated successfully. Mar 7 00:56:51.534916 systemd[1]: session-18.scope: Deactivated successfully. Mar 7 00:56:51.540360 systemd-logind[1455]: Removed session 18. 
Mar 7 00:56:56.636557 systemd[1]: Started sshd@18-116.202.31.117:22-20.161.92.111:60720.service - OpenSSH per-connection server daemon (20.161.92.111:60720). Mar 7 00:56:56.702834 systemd[1]: run-containerd-runc-k8s.io-9349d5f4ee9eac7622a9bb74ce94f2b8908e273fed82ef71220fcf485d17f5a2-runc.8K6uYe.mount: Deactivated successfully. Mar 7 00:56:57.223096 sshd[5974]: Accepted publickey for core from 20.161.92.111 port 60720 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:56:57.226874 sshd[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:57.234716 systemd-logind[1455]: New session 19 of user core. Mar 7 00:56:57.243423 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 7 00:56:57.735539 sshd[5974]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:57.742302 systemd[1]: sshd@18-116.202.31.117:22-20.161.92.111:60720.service: Deactivated successfully. Mar 7 00:56:57.747012 systemd[1]: session-19.scope: Deactivated successfully. Mar 7 00:56:57.752183 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. Mar 7 00:56:57.756586 systemd-logind[1455]: Removed session 19. Mar 7 00:57:02.846707 systemd[1]: Started sshd@19-116.202.31.117:22-20.161.92.111:44364.service - OpenSSH per-connection server daemon (20.161.92.111:44364). Mar 7 00:57:03.432148 sshd[6027]: Accepted publickey for core from 20.161.92.111 port 44364 ssh2: RSA SHA256:fFFMlaCBm9OkQatq7Cg+moKRVH6SG+EKtX7SFDagfEI Mar 7 00:57:03.435422 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:57:03.441470 systemd-logind[1455]: New session 20 of user core. Mar 7 00:57:03.448436 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 7 00:57:03.940728 sshd[6027]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:03.948030 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Mar 7 00:57:03.948869 systemd[1]: sshd@19-116.202.31.117:22-20.161.92.111:44364.service: Deactivated successfully. Mar 7 00:57:03.951729 systemd[1]: session-20.scope: Deactivated successfully. Mar 7 00:57:03.952888 systemd-logind[1455]: Removed session 20. Mar 7 00:57:07.870526 systemd[1]: run-containerd-runc-k8s.io-c9c094cc7f5b673fc8aa8f5ca43b801226857a8c483d277c2dd5fb5ff6e353d0-runc.vlz9wG.mount: Deactivated successfully. Mar 7 00:57:18.456191 systemd[1]: cri-containerd-0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636.scope: Deactivated successfully. Mar 7 00:57:18.456807 systemd[1]: cri-containerd-0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636.scope: Consumed 16.206s CPU time. Mar 7 00:57:18.482150 containerd[1473]: time="2026-03-07T00:57:18.482002253Z" level=info msg="shim disconnected" id=0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636 namespace=k8s.io Mar 7 00:57:18.482150 containerd[1473]: time="2026-03-07T00:57:18.482148176Z" level=warning msg="cleaning up after shim disconnected" id=0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636 namespace=k8s.io Mar 7 00:57:18.482150 containerd[1473]: time="2026-03-07T00:57:18.482158617Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:18.483698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636-rootfs.mount: Deactivated successfully. 
Mar 7 00:57:18.560602 kubelet[2572]: E0307 00:57:18.556265 2572 controller.go:195] "Failed to update lease" err="Put \"https://116.202.31.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-2a659a64a8?timeout=10s\": context deadline exceeded" Mar 7 00:57:18.850597 kubelet[2572]: E0307 00:57:18.827420 2572 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:33736->10.0.0.2:2379: read: connection timed out" Mar 7 00:57:19.101552 kubelet[2572]: I0307 00:57:19.101441 2572 scope.go:117] "RemoveContainer" containerID="0511cab5d3cb2f85a70fc66f3da758bca8d89b8341ace4447e589f232e0ec636" Mar 7 00:57:19.104674 containerd[1473]: time="2026-03-07T00:57:19.104630809Z" level=info msg="CreateContainer within sandbox \"b8c484b9a085700fc34763917e9e1388464439ee7fb3bdc89a3d6deff669e00b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 7 00:57:19.123083 containerd[1473]: time="2026-03-07T00:57:19.123029497Z" level=info msg="CreateContainer within sandbox \"b8c484b9a085700fc34763917e9e1388464439ee7fb3bdc89a3d6deff669e00b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"faae2170bd02a65eca1d04556bbfd4f258fbe413e972d0308a3b250ec3db35fe\"" Mar 7 00:57:19.124003 containerd[1473]: time="2026-03-07T00:57:19.123959116Z" level=info msg="StartContainer for \"faae2170bd02a65eca1d04556bbfd4f258fbe413e972d0308a3b250ec3db35fe\"" Mar 7 00:57:19.146710 systemd[1]: cri-containerd-6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f.scope: Deactivated successfully. Mar 7 00:57:19.148122 systemd[1]: cri-containerd-6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f.scope: Consumed 4.578s CPU time, 16.0M memory peak, 0B memory swap peak. Mar 7 00:57:19.182175 systemd[1]: Started cri-containerd-faae2170bd02a65eca1d04556bbfd4f258fbe413e972d0308a3b250ec3db35fe.scope - libcontainer container faae2170bd02a65eca1d04556bbfd4f258fbe413e972d0308a3b250ec3db35fe. Mar 7 00:57:19.195394 containerd[1473]: time="2026-03-07T00:57:19.195308224Z" level=info msg="shim disconnected" id=6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f namespace=k8s.io Mar 7 00:57:19.195394 containerd[1473]: time="2026-03-07T00:57:19.195394585Z" level=warning msg="cleaning up after shim disconnected" id=6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f namespace=k8s.io Mar 7 00:57:19.195594 containerd[1473]: time="2026-03-07T00:57:19.195413266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:19.235663 containerd[1473]: time="2026-03-07T00:57:19.235602230Z" level=info msg="StartContainer for \"faae2170bd02a65eca1d04556bbfd4f258fbe413e972d0308a3b250ec3db35fe\" returns successfully" Mar 7 00:57:19.485260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f-rootfs.mount: Deactivated successfully. Mar 7 00:57:20.113939 kubelet[2572]: I0307 00:57:20.113528 2572 scope.go:117] "RemoveContainer" containerID="6253c2bb3b8c5f7f00f36bc0c5fea2632f76f4e4764d2d4ef7d27b73725bf00f" Mar 7 00:57:20.119966 containerd[1473]: time="2026-03-07T00:57:20.118490315Z" level=info msg="CreateContainer within sandbox \"53dd608abfeaa5410b4915c8cdec955e7e1a0fd0f17adf2646da7ab11d1d1f2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 00:57:20.151756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414847827.mount: Deactivated successfully. 
Mar 7 00:57:20.158598 containerd[1473]: time="2026-03-07T00:57:20.158555229Z" level=info msg="CreateContainer within sandbox \"53dd608abfeaa5410b4915c8cdec955e7e1a0fd0f17adf2646da7ab11d1d1f2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8e5b2ccaba893df2838d785ae025d0a70b39270189e4ee9c650d2ac822bd4361\"" Mar 7 00:57:20.160685 containerd[1473]: time="2026-03-07T00:57:20.160651351Z" level=info msg="StartContainer for \"8e5b2ccaba893df2838d785ae025d0a70b39270189e4ee9c650d2ac822bd4361\"" Mar 7 00:57:20.204487 systemd[1]: Started cri-containerd-8e5b2ccaba893df2838d785ae025d0a70b39270189e4ee9c650d2ac822bd4361.scope - libcontainer container 8e5b2ccaba893df2838d785ae025d0a70b39270189e4ee9c650d2ac822bd4361. Mar 7 00:57:20.252315 containerd[1473]: time="2026-03-07T00:57:20.252255447Z" level=info msg="StartContainer for \"8e5b2ccaba893df2838d785ae025d0a70b39270189e4ee9c650d2ac822bd4361\" returns successfully" Mar 7 00:57:23.498998 kubelet[2572]: E0307 00:57:23.496233 2572 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:33544->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-2a659a64a8.189a692a22d68252 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-2a659a64a8,UID:ffa1bb3ecffc39f583eb0d17d1029745,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-2a659a64a8,},FirstTimestamp:2026-03-07 00:57:13.01773781 +0000 UTC m=+209.024277451,LastTimestamp:2026-03-07 00:57:13.01773781 +0000 UTC m=+209.024277451,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-2a659a64a8,}"