Apr 23 23:13:24.821191 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 23 23:13:24.821213 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 23 21:57:58 -00 2026
Apr 23 23:13:24.821222 kernel: KASLR enabled
Apr 23 23:13:24.821228 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Apr 23 23:13:24.821233 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Apr 23 23:13:24.821239 kernel: random: crng init done
Apr 23 23:13:24.821245 kernel: secureboot: Secure boot disabled
Apr 23 23:13:24.821251 kernel: ACPI: Early table checksum verification disabled
Apr 23 23:13:24.821257 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Apr 23 23:13:24.821263 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Apr 23 23:13:24.821270 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821276 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821282 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821287 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821295 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821302 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821308 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821314 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821320 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 23 23:13:24.821326 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Apr 23 23:13:24.821332 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Apr 23 23:13:24.821338 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 23 23:13:24.821344 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Apr 23 23:13:24.821350 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Apr 23 23:13:24.821356 kernel: Zone ranges:
Apr 23 23:13:24.821363 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 23 23:13:24.821370 kernel: DMA32 empty
Apr 23 23:13:24.821376 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Apr 23 23:13:24.821382 kernel: Device empty
Apr 23 23:13:24.821388 kernel: Movable zone start for each node
Apr 23 23:13:24.821394 kernel: Early memory node ranges
Apr 23 23:13:24.821400 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Apr 23 23:13:24.821406 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Apr 23 23:13:24.821413 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Apr 23 23:13:24.821419 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Apr 23 23:13:24.821425 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Apr 23 23:13:24.821431 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Apr 23 23:13:24.821437 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Apr 23 23:13:24.821445 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Apr 23 23:13:24.821503 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Apr 23 23:13:24.821513 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Apr 23 23:13:24.821520 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Apr 23 23:13:24.821527 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Apr 23 23:13:24.821534 kernel: psci: probing for conduit method from ACPI.
Apr 23 23:13:24.821541 kernel: psci: PSCIv1.1 detected in firmware.
Apr 23 23:13:24.821547 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 23 23:13:24.821554 kernel: psci: Trusted OS migration not required
Apr 23 23:13:24.821560 kernel: psci: SMC Calling Convention v1.1
Apr 23 23:13:24.821567 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 23 23:13:24.821573 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 23 23:13:24.821580 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 23 23:13:24.821586 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 23 23:13:24.821593 kernel: Detected PIPT I-cache on CPU0
Apr 23 23:13:24.821600 kernel: CPU features: detected: GIC system register CPU interface
Apr 23 23:13:24.821607 kernel: CPU features: detected: Spectre-v4
Apr 23 23:13:24.821614 kernel: CPU features: detected: Spectre-BHB
Apr 23 23:13:24.821620 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 23 23:13:24.821627 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 23 23:13:24.821633 kernel: CPU features: detected: ARM erratum 1418040
Apr 23 23:13:24.821640 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 23 23:13:24.821646 kernel: alternatives: applying boot alternatives
Apr 23 23:13:24.821654 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8669c84e6bfac0c003f3ced682d9b5c0fda27fc2948639441be65941607b4c3d
Apr 23 23:13:24.821661 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 23 23:13:24.821667 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 23 23:13:24.821673 kernel: Fallback order for Node 0: 0
Apr 23 23:13:24.821681 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Apr 23 23:13:24.821691 kernel: Policy zone: Normal
Apr 23 23:13:24.821698 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 23 23:13:24.821706 kernel: software IO TLB: area num 2.
Apr 23 23:13:24.821713 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Apr 23 23:13:24.821721 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 23 23:13:24.821727 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 23 23:13:24.821735 kernel: rcu: RCU event tracing is enabled.
Apr 23 23:13:24.821741 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 23 23:13:24.821748 kernel: Trampoline variant of Tasks RCU enabled.
Apr 23 23:13:24.821755 kernel: Tracing variant of Tasks RCU enabled.
Apr 23 23:13:24.821761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 23 23:13:24.821769 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 23 23:13:24.821776 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 23 23:13:24.821783 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 23 23:13:24.821789 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 23 23:13:24.821795 kernel: GICv3: 256 SPIs implemented
Apr 23 23:13:24.821802 kernel: GICv3: 0 Extended SPIs implemented
Apr 23 23:13:24.821808 kernel: Root IRQ handler: gic_handle_irq
Apr 23 23:13:24.821814 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 23 23:13:24.821821 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Apr 23 23:13:24.821827 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 23 23:13:24.821834 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 23 23:13:24.821841 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Apr 23 23:13:24.821848 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Apr 23 23:13:24.821855 kernel: GICv3: using LPI property table @0x0000000100120000
Apr 23 23:13:24.821861 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Apr 23 23:13:24.821868 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 23 23:13:24.821874 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 23 23:13:24.821881 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 23 23:13:24.821887 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 23 23:13:24.821894 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 23 23:13:24.821900 kernel: Console: colour dummy device 80x25
Apr 23 23:13:24.821907 kernel: ACPI: Core revision 20240827
Apr 23 23:13:24.821916 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 23 23:13:24.821923 kernel: pid_max: default: 32768 minimum: 301
Apr 23 23:13:24.821929 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 23 23:13:24.821937 kernel: landlock: Up and running.
Apr 23 23:13:24.821943 kernel: SELinux: Initializing.
Apr 23 23:13:24.821950 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 23 23:13:24.821956 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 23 23:13:24.821963 kernel: rcu: Hierarchical SRCU implementation.
Apr 23 23:13:24.821969 kernel: rcu: Max phase no-delay instances is 400.
Apr 23 23:13:24.821977 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 23 23:13:24.821984 kernel: Remapping and enabling EFI services.
Apr 23 23:13:24.821990 kernel: smp: Bringing up secondary CPUs ...
Apr 23 23:13:24.821997 kernel: Detected PIPT I-cache on CPU1
Apr 23 23:13:24.822004 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 23 23:13:24.822010 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Apr 23 23:13:24.822017 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 23 23:13:24.822052 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 23 23:13:24.822059 kernel: smp: Brought up 1 node, 2 CPUs
Apr 23 23:13:24.822069 kernel: SMP: Total of 2 processors activated.
Apr 23 23:13:24.822080 kernel: CPU: All CPU(s) started at EL1
Apr 23 23:13:24.822087 kernel: CPU features: detected: 32-bit EL0 Support
Apr 23 23:13:24.822095 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 23 23:13:24.822102 kernel: CPU features: detected: Common not Private translations
Apr 23 23:13:24.822109 kernel: CPU features: detected: CRC32 instructions
Apr 23 23:13:24.822116 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 23 23:13:24.822123 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 23 23:13:24.822132 kernel: CPU features: detected: LSE atomic instructions
Apr 23 23:13:24.822139 kernel: CPU features: detected: Privileged Access Never
Apr 23 23:13:24.822146 kernel: CPU features: detected: RAS Extension Support
Apr 23 23:13:24.822153 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 23 23:13:24.822160 kernel: alternatives: applying system-wide alternatives
Apr 23 23:13:24.822167 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Apr 23 23:13:24.822174 kernel: Memory: 3858780K/4096000K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 215732K reserved, 16384K cma-reserved)
Apr 23 23:13:24.822181 kernel: devtmpfs: initialized
Apr 23 23:13:24.822188 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 23 23:13:24.822196 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 23 23:13:24.822203 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 23 23:13:24.822210 kernel: 0 pages in range for non-PLT usage
Apr 23 23:13:24.822217 kernel: 508384 pages in range for PLT usage
Apr 23 23:13:24.822224 kernel: pinctrl core: initialized pinctrl subsystem
Apr 23 23:13:24.822230 kernel: SMBIOS 3.0.0 present.
Apr 23 23:13:24.822237 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Apr 23 23:13:24.822244 kernel: DMI: Memory slots populated: 1/1
Apr 23 23:13:24.822251 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 23 23:13:24.822260 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 23 23:13:24.822267 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 23 23:13:24.822274 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 23 23:13:24.822281 kernel: audit: initializing netlink subsys (disabled)
Apr 23 23:13:24.822288 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Apr 23 23:13:24.822295 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 23 23:13:24.822301 kernel: cpuidle: using governor menu
Apr 23 23:13:24.822308 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 23 23:13:24.822315 kernel: ASID allocator initialised with 32768 entries
Apr 23 23:13:24.822324 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 23 23:13:24.822331 kernel: Serial: AMBA PL011 UART driver
Apr 23 23:13:24.822337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 23 23:13:24.822344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 23 23:13:24.822351 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 23 23:13:24.822358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 23 23:13:24.822365 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 23 23:13:24.822372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 23 23:13:24.822379 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 23 23:13:24.822388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 23 23:13:24.822394 kernel: ACPI: Added _OSI(Module Device)
Apr 23 23:13:24.822401 kernel: ACPI: Added _OSI(Processor Device)
Apr 23 23:13:24.822408 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 23 23:13:24.822415 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 23 23:13:24.822422 kernel: ACPI: Interpreter enabled
Apr 23 23:13:24.822428 kernel: ACPI: Using GIC for interrupt routing
Apr 23 23:13:24.822436 kernel: ACPI: MCFG table detected, 1 entries
Apr 23 23:13:24.822442 kernel: ACPI: CPU0 has been hot-added
Apr 23 23:13:24.822459 kernel: ACPI: CPU1 has been hot-added
Apr 23 23:13:24.822466 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 23 23:13:24.822473 kernel: printk: legacy console [ttyAMA0] enabled
Apr 23 23:13:24.822479 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 23 23:13:24.822631 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 23 23:13:24.822697 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 23 23:13:24.822755 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 23 23:13:24.822814 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 23 23:13:24.822870 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 23 23:13:24.822880 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 23 23:13:24.822887 kernel: PCI host bridge to bus 0000:00
Apr 23 23:13:24.822952 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 23 23:13:24.823005 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 23 23:13:24.823084 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 23 23:13:24.823137 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 23 23:13:24.823223 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Apr 23 23:13:24.823294 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Apr 23 23:13:24.823355 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Apr 23 23:13:24.823413 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Apr 23 23:13:24.823496 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.823558 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Apr 23 23:13:24.823620 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 23 23:13:24.823678 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Apr 23 23:13:24.823736 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Apr 23 23:13:24.823801 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.823860 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Apr 23 23:13:24.823917 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 23 23:13:24.823974 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Apr 23 23:13:24.824056 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.824116 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Apr 23 23:13:24.824174 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 23 23:13:24.824231 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Apr 23 23:13:24.824289 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Apr 23 23:13:24.824357 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.824415 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Apr 23 23:13:24.824520 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 23 23:13:24.824583 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Apr 23 23:13:24.824641 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Apr 23 23:13:24.824706 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.824765 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Apr 23 23:13:24.824825 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 23 23:13:24.824883 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 23 23:13:24.824943 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Apr 23 23:13:24.825009 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.825078 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Apr 23 23:13:24.825138 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 23 23:13:24.825195 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Apr 23 23:13:24.825253 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Apr 23 23:13:24.825316 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.825377 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Apr 23 23:13:24.825434 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 23 23:13:24.825503 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Apr 23 23:13:24.825561 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Apr 23 23:13:24.825628 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.825687 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Apr 23 23:13:24.825749 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 23 23:13:24.825806 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Apr 23 23:13:24.825870 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Apr 23 23:13:24.825930 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Apr 23 23:13:24.825990 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 23 23:13:24.826070 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Apr 23 23:13:24.826147 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Apr 23 23:13:24.826218 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Apr 23 23:13:24.826296 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 23 23:13:24.826365 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Apr 23 23:13:24.826425 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 23 23:13:24.826495 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 23 23:13:24.826569 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Apr 23 23:13:24.826629 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Apr 23 23:13:24.826733 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Apr 23 23:13:24.826797 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Apr 23 23:13:24.826857 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Apr 23 23:13:24.826924 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Apr 23 23:13:24.826984 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Apr 23 23:13:24.827091 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Apr 23 23:13:24.827159 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Apr 23 23:13:24.827219 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Apr 23 23:13:24.827286 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Apr 23 23:13:24.827347 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Apr 23 23:13:24.827406 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Apr 23 23:13:24.827488 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Apr 23 23:13:24.827551 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Apr 23 23:13:24.827617 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Apr 23 23:13:24.827677 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Apr 23 23:13:24.827739 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Apr 23 23:13:24.827798 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Apr 23 23:13:24.827856 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Apr 23 23:13:24.827917 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Apr 23 23:13:24.827976 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Apr 23 23:13:24.828054 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Apr 23 23:13:24.828116 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Apr 23 23:13:24.829625 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Apr 23 23:13:24.829704 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Apr 23 23:13:24.829770 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Apr 23 23:13:24.829830 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Apr 23 23:13:24.829894 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Apr 23 23:13:24.829956 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Apr 23 23:13:24.830015 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Apr 23 23:13:24.830102 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Apr 23 23:13:24.830173 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Apr 23 23:13:24.830233 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Apr 23 23:13:24.830291 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Apr 23 23:13:24.830357 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Apr 23 23:13:24.830415 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Apr 23 23:13:24.830534 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Apr 23 23:13:24.830603 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Apr 23 23:13:24.830662 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Apr 23 23:13:24.830721 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Apr 23 23:13:24.830783 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Apr 23 23:13:24.830846 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Apr 23 23:13:24.830903 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Apr 23 23:13:24.830965 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Apr 23 23:13:24.831059 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Apr 23 23:13:24.831128 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Apr 23 23:13:24.831189 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Apr 23 23:13:24.831247 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Apr 23 23:13:24.831309 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Apr 23 23:13:24.831368 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Apr 23 23:13:24.831426 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Apr 23 23:13:24.831500 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Apr 23 23:13:24.831559 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Apr 23 23:13:24.831617 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Apr 23 23:13:24.831675 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Apr 23 23:13:24.831734 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Apr 23 23:13:24.831798 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Apr 23 23:13:24.831857 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Apr 23 23:13:24.831916 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Apr 23 23:13:24.831975 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Apr 23 23:13:24.832079 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Apr 23 23:13:24.832148 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Apr 23 23:13:24.832208 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Apr 23 23:13:24.832267 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Apr 23 23:13:24.832328 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Apr 23 23:13:24.832386 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Apr 23 23:13:24.832443 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Apr 23 23:13:24.832556 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Apr 23 23:13:24.832620 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Apr 23 23:13:24.832679 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Apr 23 23:13:24.832737 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Apr 23 23:13:24.832795 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Apr 23 23:13:24.832852 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Apr 23 23:13:24.832911 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Apr 23 23:13:24.832968 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Apr 23 23:13:24.833074 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Apr 23 23:13:24.835717 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Apr 23 23:13:24.835798 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Apr 23 23:13:24.835860 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Apr 23 23:13:24.835924 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Apr 23 23:13:24.835984 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Apr 23 23:13:24.836085 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Apr 23 23:13:24.836159 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Apr 23 23:13:24.836221 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Apr 23 23:13:24.836291 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Apr 23 23:13:24.836352 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Apr 23 23:13:24.836411 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Apr 23 23:13:24.836486 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Apr 23 23:13:24.836549 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Apr 23 23:13:24.836615 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Apr 23 23:13:24.836676 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Apr 23 23:13:24.836753 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Apr 23 23:13:24.836815 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Apr 23 23:13:24.836874 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Apr 23 23:13:24.836942 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Apr 23 23:13:24.837003 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Apr 23 23:13:24.837109 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Apr 23 23:13:24.837171 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Apr 23 23:13:24.837233 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Apr 23 23:13:24.837291 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Apr 23 23:13:24.837356 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Apr 23 23:13:24.837417 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Apr 23 23:13:24.837489 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Apr 23 23:13:24.837549 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Apr 23 23:13:24.837607 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Apr 23 23:13:24.837675 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Apr 23 23:13:24.837749 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Apr 23 23:13:24.837811 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Apr 23 23:13:24.837869 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Apr 23 23:13:24.837927 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Apr 23 23:13:24.837985 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Apr 23 23:13:24.838111 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Apr 23 23:13:24.838181 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Apr 23 23:13:24.838242 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Apr 23 23:13:24.838314 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Apr 23 23:13:24.838373 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Apr 23 23:13:24.838431 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Apr 23 23:13:24.838545 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Apr 23 23:13:24.838611 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Apr 23 23:13:24.838676 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Apr 23 23:13:24.838735 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Apr 23 23:13:24.838796 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Apr 23 23:13:24.838855 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Apr 23 23:13:24.838916 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Apr 23 23:13:24.838976 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Apr 23 23:13:24.839071 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Apr 23 23:13:24.839133 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Apr 23 23:13:24.839192 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Apr 23 23:13:24.839252 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Apr 23 23:13:24.839310 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Apr 23
23:13:24.839371 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Apr 23 23:13:24.839429 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Apr 23 23:13:24.839506 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 23 23:13:24.839561 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 23 23:13:24.839614 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 23 23:13:24.839678 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Apr 23 23:13:24.839733 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Apr 23 23:13:24.839790 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Apr 23 23:13:24.839850 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Apr 23 23:13:24.839905 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Apr 23 23:13:24.839958 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Apr 23 23:13:24.840029 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Apr 23 23:13:24.840089 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Apr 23 23:13:24.840147 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Apr 23 23:13:24.840209 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Apr 23 23:13:24.840263 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Apr 23 23:13:24.840316 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Apr 23 23:13:24.840378 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Apr 23 23:13:24.840431 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Apr 23 23:13:24.840496 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Apr 23 23:13:24.840563 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Apr 23 23:13:24.840622 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Apr 23 23:13:24.840677 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Apr 23 23:13:24.840738 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Apr 23 23:13:24.840792 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Apr 23 23:13:24.840846 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Apr 23 23:13:24.840907 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Apr 23 23:13:24.840964 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Apr 23 23:13:24.841018 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Apr 23 23:13:24.841107 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Apr 23 23:13:24.841163 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Apr 23 23:13:24.841216 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Apr 23 23:13:24.841226 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 23 23:13:24.841234 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 23 23:13:24.841244 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 23 23:13:24.841252 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 23 23:13:24.841259 kernel: iommu: Default domain type: Translated Apr 23 23:13:24.841267 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 23 23:13:24.841274 kernel: efivars: Registered efivars operations Apr 23 23:13:24.841282 kernel: vgaarb: loaded Apr 23 23:13:24.841289 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 23 23:13:24.841296 kernel: VFS: Disk quotas dquot_6.6.0 Apr 23 23:13:24.841304 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 23 23:13:24.841313 kernel: pnp: PnP ACPI init Apr 23 23:13:24.841387 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 23 23:13:24.841398 kernel: pnp: PnP ACPI: found 1 devices Apr 23 23:13:24.841406 kernel: NET: Registered PF_INET 
protocol family Apr 23 23:13:24.841414 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 23 23:13:24.841421 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 23 23:13:24.841429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 23 23:13:24.841436 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 23 23:13:24.841446 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 23 23:13:24.841496 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 23 23:13:24.841504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 23 23:13:24.841512 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 23 23:13:24.841520 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 23 23:13:24.841603 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Apr 23 23:13:24.841615 kernel: PCI: CLS 0 bytes, default 64 Apr 23 23:13:24.841623 kernel: kvm [1]: HYP mode not available Apr 23 23:13:24.841630 kernel: Initialise system trusted keyrings Apr 23 23:13:24.841642 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 23 23:13:24.841649 kernel: Key type asymmetric registered Apr 23 23:13:24.841657 kernel: Asymmetric key parser 'x509' registered Apr 23 23:13:24.841665 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 23 23:13:24.841673 kernel: io scheduler mq-deadline registered Apr 23 23:13:24.841681 kernel: io scheduler kyber registered Apr 23 23:13:24.841689 kernel: io scheduler bfq registered Apr 23 23:13:24.841697 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Apr 23 23:13:24.841765 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Apr 23 23:13:24.841830 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Apr 23 23:13:24.841901 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.841964 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Apr 23 23:13:24.842042 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Apr 23 23:13:24.842106 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.842167 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Apr 23 23:13:24.842226 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Apr 23 23:13:24.842285 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.842349 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Apr 23 23:13:24.842410 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Apr 23 23:13:24.842489 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.842557 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Apr 23 23:13:24.842616 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Apr 23 23:13:24.842675 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.842737 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Apr 23 23:13:24.842801 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Apr 23 23:13:24.842860 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.842919 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Apr 23 23:13:24.842980 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Apr 23 23:13:24.843064 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.843130 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Apr 23 23:13:24.843189 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Apr 23 23:13:24.843247 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.843260 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Apr 23 23:13:24.843320 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Apr 23 23:13:24.843378 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Apr 23 23:13:24.843436 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 23 23:13:24.843481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 23 23:13:24.843491 kernel: ACPI: button: Power Button [PWRB] Apr 23 23:13:24.843499 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 23 23:13:24.843575 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Apr 23 23:13:24.843647 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Apr 23 23:13:24.843658 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 23 23:13:24.843666 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 23 23:13:24.843728 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Apr 23 23:13:24.843739 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Apr 23 23:13:24.843747 kernel: thunder_xcv, ver 1.0 Apr 23 23:13:24.843758 kernel: thunder_bgx, ver 1.0 Apr 23 23:13:24.843766 kernel: nicpf, ver 1.0 Apr 23 23:13:24.843774 kernel: nicvf, ver 1.0 Apr 23 23:13:24.843851 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 23 23:13:24.843910 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-23T23:13:24 UTC (1776986004) Apr 23 23:13:24.843920 kernel: hid: raw HID events 
driver (C) Jiri Kosina Apr 23 23:13:24.843928 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Apr 23 23:13:24.843936 kernel: NET: Registered PF_INET6 protocol family Apr 23 23:13:24.843944 kernel: watchdog: NMI not fully supported Apr 23 23:13:24.843952 kernel: watchdog: Hard watchdog permanently disabled Apr 23 23:13:24.843960 kernel: Segment Routing with IPv6 Apr 23 23:13:24.843969 kernel: In-situ OAM (IOAM) with IPv6 Apr 23 23:13:24.843977 kernel: NET: Registered PF_PACKET protocol family Apr 23 23:13:24.843984 kernel: Key type dns_resolver registered Apr 23 23:13:24.843992 kernel: registered taskstats version 1 Apr 23 23:13:24.844000 kernel: Loading compiled-in X.509 certificates Apr 23 23:13:24.844007 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 1129832e4b4ea3c9ff0dc43e02ec7de2e4d9d907' Apr 23 23:13:24.844015 kernel: Demotion targets for Node 0: null Apr 23 23:13:24.844039 kernel: Key type .fscrypt registered Apr 23 23:13:24.844047 kernel: Key type fscrypt-provisioning registered Apr 23 23:13:24.844058 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 23 23:13:24.844066 kernel: ima: Allocated hash algorithm: sha1 Apr 23 23:13:24.844073 kernel: ima: No architecture policies found Apr 23 23:13:24.844080 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 23 23:13:24.844088 kernel: clk: Disabling unused clocks Apr 23 23:13:24.844095 kernel: PM: genpd: Disabling unused power domains Apr 23 23:13:24.844102 kernel: Warning: unable to open an initial console. Apr 23 23:13:24.844110 kernel: Freeing unused kernel memory: 39552K Apr 23 23:13:24.844117 kernel: Run /init as init process Apr 23 23:13:24.844124 kernel: with arguments: Apr 23 23:13:24.844133 kernel: /init Apr 23 23:13:24.844140 kernel: with environment: Apr 23 23:13:24.844147 kernel: HOME=/ Apr 23 23:13:24.844155 kernel: TERM=linux Apr 23 23:13:24.844163 systemd[1]: Successfully made /usr/ read-only. 
Apr 23 23:13:24.844174 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 23 23:13:24.844182 systemd[1]: Detected virtualization kvm. Apr 23 23:13:24.844191 systemd[1]: Detected architecture arm64. Apr 23 23:13:24.844199 systemd[1]: Running in initrd. Apr 23 23:13:24.844206 systemd[1]: No hostname configured, using default hostname. Apr 23 23:13:24.844214 systemd[1]: Hostname set to . Apr 23 23:13:24.844222 systemd[1]: Initializing machine ID from VM UUID. Apr 23 23:13:24.844230 systemd[1]: Queued start job for default target initrd.target. Apr 23 23:13:24.844237 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 23 23:13:24.844245 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:13:24.844255 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 23 23:13:24.844264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 23 23:13:24.844272 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 23 23:13:24.844282 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 23 23:13:24.844291 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 23 23:13:24.844298 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 23 23:13:24.844306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Apr 23 23:13:24.844316 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:13:24.844324 systemd[1]: Reached target paths.target - Path Units. Apr 23 23:13:24.844331 systemd[1]: Reached target slices.target - Slice Units. Apr 23 23:13:24.844339 systemd[1]: Reached target swap.target - Swaps. Apr 23 23:13:24.844347 systemd[1]: Reached target timers.target - Timer Units. Apr 23 23:13:24.844355 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 23 23:13:24.844362 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 23 23:13:24.844370 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 23 23:13:24.844378 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 23 23:13:24.844388 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:13:24.844396 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 23 23:13:24.844403 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:13:24.844411 systemd[1]: Reached target sockets.target - Socket Units. Apr 23 23:13:24.844419 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 23 23:13:24.844427 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 23 23:13:24.844434 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 23 23:13:24.844442 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 23 23:13:24.844462 systemd[1]: Starting systemd-fsck-usr.service... Apr 23 23:13:24.844470 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 23 23:13:24.844478 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Apr 23 23:13:24.844486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:13:24.844494 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 23 23:13:24.844506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:13:24.844518 systemd[1]: Finished systemd-fsck-usr.service. Apr 23 23:13:24.844528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 23 23:13:24.844567 systemd-journald[245]: Collecting audit messages is disabled. Apr 23 23:13:24.844589 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 23 23:13:24.844597 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 23 23:13:24.844605 kernel: Bridge firewalling registered Apr 23 23:13:24.844613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:24.844621 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 23 23:13:24.844629 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 23 23:13:24.844638 systemd-journald[245]: Journal started Apr 23 23:13:24.844658 systemd-journald[245]: Runtime Journal (/run/log/journal/300d3793fb7748fdaae78ce01375cbd5) is 8M, max 76.5M, 68.5M free. Apr 23 23:13:24.846352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 23 23:13:24.806485 systemd-modules-load[246]: Inserted module 'overlay' Apr 23 23:13:24.835083 systemd-modules-load[246]: Inserted module 'br_netfilter' Apr 23 23:13:24.850577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 23 23:13:24.852517 systemd[1]: Started systemd-journald.service - Journal Service. 
Apr 23 23:13:24.859237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 23 23:13:24.863897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:13:24.875181 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:13:24.876227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:13:24.881252 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 23 23:13:24.882072 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 23 23:13:24.885175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:13:24.894205 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 23 23:13:24.907128 dracut-cmdline[283]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=8669c84e6bfac0c003f3ced682d9b5c0fda27fc2948639441be65941607b4c3d Apr 23 23:13:24.942160 systemd-resolved[285]: Positive Trust Anchors: Apr 23 23:13:24.942180 systemd-resolved[285]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 23 23:13:24.942214 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 23 23:13:24.948185 systemd-resolved[285]: Defaulting to hostname 'linux'. Apr 23 23:13:24.950730 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 23 23:13:24.951504 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 23 23:13:25.021077 kernel: SCSI subsystem initialized Apr 23 23:13:25.025066 kernel: Loading iSCSI transport class v2.0-870. Apr 23 23:13:25.033423 kernel: iscsi: registered transport (tcp) Apr 23 23:13:25.046262 kernel: iscsi: registered transport (qla4xxx) Apr 23 23:13:25.046346 kernel: QLogic iSCSI HBA Driver Apr 23 23:13:25.069718 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 23 23:13:25.090415 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:13:25.093957 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 23 23:13:25.149606 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 23 23:13:25.151949 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Apr 23 23:13:25.222105 kernel: raid6: neonx8 gen() 15719 MB/s Apr 23 23:13:25.239067 kernel: raid6: neonx4 gen() 15754 MB/s Apr 23 23:13:25.256079 kernel: raid6: neonx2 gen() 13186 MB/s Apr 23 23:13:25.273081 kernel: raid6: neonx1 gen() 10394 MB/s Apr 23 23:13:25.290100 kernel: raid6: int64x8 gen() 6873 MB/s Apr 23 23:13:25.307086 kernel: raid6: int64x4 gen() 7319 MB/s Apr 23 23:13:25.324089 kernel: raid6: int64x2 gen() 6083 MB/s Apr 23 23:13:25.341095 kernel: raid6: int64x1 gen() 5039 MB/s Apr 23 23:13:25.341179 kernel: raid6: using algorithm neonx4 gen() 15754 MB/s Apr 23 23:13:25.358089 kernel: raid6: .... xor() 12299 MB/s, rmw enabled Apr 23 23:13:25.358148 kernel: raid6: using neon recovery algorithm Apr 23 23:13:25.363056 kernel: xor: measuring software checksum speed Apr 23 23:13:25.363112 kernel: 8regs : 21647 MB/sec Apr 23 23:13:25.363130 kernel: 32regs : 19571 MB/sec Apr 23 23:13:25.364162 kernel: arm64_neon : 26493 MB/sec Apr 23 23:13:25.364191 kernel: xor: using function: arm64_neon (26493 MB/sec) Apr 23 23:13:25.418109 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 23 23:13:25.427061 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 23 23:13:25.429861 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:13:25.461622 systemd-udevd[493]: Using default interface naming scheme 'v255'. Apr 23 23:13:25.465979 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:13:25.471180 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 23 23:13:25.501348 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Apr 23 23:13:25.530878 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 23 23:13:25.533182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 23 23:13:25.596772 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Apr 23 23:13:25.601779 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 23 23:13:25.696058 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Apr 23 23:13:25.706046 kernel: ACPI: bus type USB registered Apr 23 23:13:25.706102 kernel: usbcore: registered new interface driver usbfs Apr 23 23:13:25.706112 kernel: usbcore: registered new interface driver hub Apr 23 23:13:25.706122 kernel: usbcore: registered new device driver usb Apr 23 23:13:25.709623 kernel: scsi host0: Virtio SCSI HBA Apr 23 23:13:25.720039 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 23 23:13:25.721038 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 23 23:13:25.750306 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 23 23:13:25.750562 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Apr 23 23:13:25.750650 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 23 23:13:25.750942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:13:25.752161 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Apr 23 23:13:25.753183 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Apr 23 23:13:25.753271 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Apr 23 23:13:25.751849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:25.757362 kernel: hub 1-0:1.0: USB hub found Apr 23 23:13:25.757557 kernel: hub 1-0:1.0: 4 ports detected Apr 23 23:13:25.757635 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 23 23:13:25.758148 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 23 23:13:25.760870 kernel: hub 2-0:1.0: USB hub found Apr 23 23:13:25.761005 kernel: hub 2-0:1.0: 4 ports detected Apr 23 23:13:25.761369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 23 23:13:25.770403 kernel: sd 0:0:0:1: Power-on or device reset occurred Apr 23 23:13:25.771498 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Apr 23 23:13:25.772044 kernel: sd 0:0:0:1: [sda] Write Protect is off Apr 23 23:13:25.772188 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Apr 23 23:13:25.772274 kernel: sr 0:0:0:0: Power-on or device reset occurred Apr 23 23:13:25.773755 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 23 23:13:25.773927 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Apr 23 23:13:25.775117 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 23 23:13:25.777064 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Apr 23 23:13:25.783093 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 23 23:13:25.783152 kernel: GPT:17805311 != 80003071 Apr 23 23:13:25.783163 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 23 23:13:25.785124 kernel: GPT:17805311 != 80003071 Apr 23 23:13:25.785155 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 23 23:13:25.785175 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:13:25.786094 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Apr 23 23:13:25.792974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:25.853771 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 23 23:13:25.865358 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 23 23:13:25.873925 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Apr 23 23:13:25.883418 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 23 23:13:25.884199 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 23 23:13:25.888406 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 23 23:13:25.894515 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 23 23:13:25.896099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 23 23:13:25.897522 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 23 23:13:25.899708 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 23 23:13:25.904251 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 23 23:13:25.920392 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 23 23:13:25.924084 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:13:25.924138 disk-uuid[601]: Primary Header is updated. Apr 23 23:13:25.924138 disk-uuid[601]: Secondary Entries is updated. Apr 23 23:13:25.924138 disk-uuid[601]: Secondary Header is updated. 
Apr 23 23:13:25.998072 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 23 23:13:26.129740 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Apr 23 23:13:26.129795 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Apr 23 23:13:26.129983 kernel: usbcore: registered new interface driver usbhid Apr 23 23:13:26.131084 kernel: usbhid: USB HID core driver Apr 23 23:13:26.237087 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Apr 23 23:13:26.364086 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Apr 23 23:13:26.417067 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Apr 23 23:13:26.953207 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 23 23:13:26.953926 disk-uuid[609]: The operation has completed successfully. Apr 23 23:13:27.000907 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 23 23:13:27.001762 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 23 23:13:27.033311 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 23 23:13:27.050285 sh[627]: Success Apr 23 23:13:27.067694 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 23 23:13:27.067762 kernel: device-mapper: uevent: version 1.0.3 Apr 23 23:13:27.067775 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 23 23:13:27.079055 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Apr 23 23:13:27.123341 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Apr 23 23:13:27.132222 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 23 23:13:27.139068 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 23 23:13:27.164053 kernel: BTRFS: device fsid 2db32ba8-c7e9-4b6a-ba75-58982c25581e devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (639) Apr 23 23:13:27.166062 kernel: BTRFS info (device dm-0): first mount of filesystem 2db32ba8-c7e9-4b6a-ba75-58982c25581e Apr 23 23:13:27.166124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:27.173488 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 23 23:13:27.173552 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 23 23:13:27.173563 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 23 23:13:27.175675 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 23 23:13:27.177470 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 23 23:13:27.179070 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 23 23:13:27.181400 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 23 23:13:27.183486 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 23 23:13:27.226066 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (668) Apr 23 23:13:27.228217 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:27.228274 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:27.234508 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 23 23:13:27.234569 kernel: BTRFS info (device sda6): turning on async discard Apr 23 23:13:27.235317 kernel: BTRFS info (device sda6): enabling free space tree Apr 23 23:13:27.242050 kernel: BTRFS info (device sda6): last unmount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:27.244638 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 23 23:13:27.249262 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 23 23:13:27.337908 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 23 23:13:27.343344 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 23 23:13:27.391312 ignition[733]: Ignition 2.22.0 Apr 23 23:13:27.391324 ignition[733]: Stage: fetch-offline Apr 23 23:13:27.392396 systemd-networkd[814]: lo: Link UP Apr 23 23:13:27.391355 ignition[733]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:27.392417 systemd-networkd[814]: lo: Gained carrier Apr 23 23:13:27.391363 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:27.393949 systemd-networkd[814]: Enumeration completed Apr 23 23:13:27.391456 ignition[733]: parsed url from cmdline: "" Apr 23 23:13:27.394167 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 23 23:13:27.391459 ignition[733]: no config URL provided Apr 23 23:13:27.394973 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 23 23:13:27.391463 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Apr 23 23:13:27.394978 systemd-networkd[814]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:13:27.391470 ignition[733]: no config at "/usr/lib/ignition/user.ign" Apr 23 23:13:27.396333 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:27.391477 ignition[733]: failed to fetch config: resource requires networking Apr 23 23:13:27.396337 systemd-networkd[814]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 23 23:13:27.391639 ignition[733]: Ignition finished successfully Apr 23 23:13:27.397409 systemd-networkd[814]: eth0: Link UP Apr 23 23:13:27.397588 systemd-networkd[814]: eth1: Link UP Apr 23 23:13:27.397656 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 23 23:13:27.397728 systemd-networkd[814]: eth0: Gained carrier Apr 23 23:13:27.397741 systemd-networkd[814]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 23 23:13:27.399620 systemd[1]: Reached target network.target - Network. Apr 23 23:13:27.403210 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 23 23:13:27.404845 systemd-networkd[814]: eth1: Gained carrier Apr 23 23:13:27.404866 systemd-networkd[814]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 23 23:13:27.442378 ignition[818]: Ignition 2.22.0 Apr 23 23:13:27.442434 ignition[818]: Stage: fetch Apr 23 23:13:27.442607 ignition[818]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:27.442617 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:27.442697 ignition[818]: parsed url from cmdline: "" Apr 23 23:13:27.442700 ignition[818]: no config URL provided Apr 23 23:13:27.442705 ignition[818]: reading system config file "/usr/lib/ignition/user.ign" Apr 23 23:13:27.442712 ignition[818]: no config at "/usr/lib/ignition/user.ign" Apr 23 23:13:27.442742 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Apr 23 23:13:27.443172 ignition[818]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 23 23:13:27.448133 systemd-networkd[814]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Apr 23 23:13:27.455200 systemd-networkd[814]: eth0: DHCPv4 address 49.13.208.85/32, gateway 172.31.1.1 acquired from 172.31.1.1 Apr 23 23:13:27.643497 ignition[818]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Apr 23 23:13:27.653585 ignition[818]: GET result: OK Apr 23 23:13:27.654142 ignition[818]: parsing config with SHA512: c6ef1be3d5b44fb7cb39b469b659c919dbeb82566d553a7bbd0d2d66cd9b834c7711db71f41ff8fa41442ff4d8c887695588b154f00056671af468a65daa1c60 Apr 23 23:13:27.664345 unknown[818]: fetched base config from "system" Apr 23 23:13:27.664356 unknown[818]: fetched base config from "system" Apr 23 23:13:27.664763 ignition[818]: fetch: fetch complete Apr 23 23:13:27.664362 unknown[818]: fetched user config from "hetzner" Apr 23 23:13:27.664768 ignition[818]: fetch: fetch passed Apr 23 23:13:27.664818 ignition[818]: Ignition finished successfully Apr 23 23:13:27.667383 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 23 23:13:27.669639 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 23 23:13:27.707352 ignition[826]: Ignition 2.22.0 Apr 23 23:13:27.707370 ignition[826]: Stage: kargs Apr 23 23:13:27.707550 ignition[826]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:27.707560 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:27.708377 ignition[826]: kargs: kargs passed Apr 23 23:13:27.708753 ignition[826]: Ignition finished successfully Apr 23 23:13:27.711141 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 23 23:13:27.714514 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 23 23:13:27.749362 ignition[833]: Ignition 2.22.0 Apr 23 23:13:27.749382 ignition[833]: Stage: disks Apr 23 23:13:27.749583 ignition[833]: no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:27.749593 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:27.750531 ignition[833]: disks: disks passed Apr 23 23:13:27.750585 ignition[833]: Ignition finished successfully Apr 23 23:13:27.753349 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 23 23:13:27.755223 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 23 23:13:27.756776 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 23 23:13:27.757858 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 23 23:13:27.759262 systemd[1]: Reached target sysinit.target - System Initialization. Apr 23 23:13:27.760287 systemd[1]: Reached target basic.target - Basic System. Apr 23 23:13:27.762384 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 23 23:13:27.796585 systemd-fsck[842]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Apr 23 23:13:27.804270 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 23 23:13:27.809161 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 23 23:13:27.887340 kernel: EXT4-fs (sda9): mounted filesystem 753efcb9-de86-4e47-981f-2dbd4690452d r/w with ordered data mode. Quota mode: none. Apr 23 23:13:27.888150 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 23 23:13:27.889533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 23 23:13:27.892078 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 23 23:13:27.893687 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 23 23:13:27.897361 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Apr 23 23:13:27.899036 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 23 23:13:27.899076 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 23 23:13:27.910552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 23 23:13:27.914001 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 23 23:13:27.921058 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (850) Apr 23 23:13:27.921513 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:27.923986 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:27.928380 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 23 23:13:27.928455 kernel: BTRFS info (device sda6): turning on async discard Apr 23 23:13:27.928466 kernel: BTRFS info (device sda6): enabling free space tree Apr 23 23:13:27.931449 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 23 23:13:27.970909 coreos-metadata[852]: Apr 23 23:13:27.970 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Apr 23 23:13:27.973040 coreos-metadata[852]: Apr 23 23:13:27.972 INFO Fetch successful Apr 23 23:13:27.973650 coreos-metadata[852]: Apr 23 23:13:27.973 INFO wrote hostname ci-4459-2-4-n-a35467bd0b to /sysroot/etc/hostname Apr 23 23:13:27.976102 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 23 23:13:27.981335 initrd-setup-root[880]: cut: /sysroot/etc/passwd: No such file or directory Apr 23 23:13:27.986063 initrd-setup-root[887]: cut: /sysroot/etc/group: No such file or directory Apr 23 23:13:27.990777 initrd-setup-root[894]: cut: /sysroot/etc/shadow: No such file or directory Apr 23 23:13:27.994883 initrd-setup-root[901]: cut: /sysroot/etc/gshadow: No such file or directory Apr 23 23:13:28.096149 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 23 23:13:28.100176 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 23 23:13:28.102442 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 23 23:13:28.125048 kernel: BTRFS info (device sda6): last unmount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:28.149090 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 23 23:13:28.158309 ignition[968]: INFO : Ignition 2.22.0 Apr 23 23:13:28.158309 ignition[968]: INFO : Stage: mount Apr 23 23:13:28.160189 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:28.160189 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:28.160189 ignition[968]: INFO : mount: mount passed Apr 23 23:13:28.160189 ignition[968]: INFO : Ignition finished successfully Apr 23 23:13:28.161146 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 23 23:13:28.164826 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Apr 23 23:13:28.166802 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 23 23:13:28.190099 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 23 23:13:28.215079 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (981) Apr 23 23:13:28.217052 kernel: BTRFS info (device sda6): first mount of filesystem a3954155-494f-4049-93fc-7ec9255747d0 Apr 23 23:13:28.217232 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Apr 23 23:13:28.221059 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 23 23:13:28.221135 kernel: BTRFS info (device sda6): turning on async discard Apr 23 23:13:28.221167 kernel: BTRFS info (device sda6): enabling free space tree Apr 23 23:13:28.224228 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 23 23:13:28.264675 ignition[999]: INFO : Ignition 2.22.0 Apr 23 23:13:28.265575 ignition[999]: INFO : Stage: files Apr 23 23:13:28.266224 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:28.267775 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:28.267775 ignition[999]: DEBUG : files: compiled without relabeling support, skipping Apr 23 23:13:28.269753 ignition[999]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 23 23:13:28.270706 ignition[999]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 23 23:13:28.275444 ignition[999]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 23 23:13:28.276582 ignition[999]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 23 23:13:28.277744 unknown[999]: wrote ssh authorized keys file for user: core Apr 23 23:13:28.278764 ignition[999]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 23 23:13:28.283349 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: 
op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 23 23:13:28.283349 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 23 23:13:28.378335 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 23 23:13:28.463910 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 23 23:13:28.465346 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 23 23:13:28.465346 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 23 23:13:28.465346 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 23 23:13:28.465346 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 23 23:13:28.465346 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 23 23:13:28.465346 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 23 23:13:28.465346 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 23 23:13:28.472945 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 23 23:13:28.472945 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 23 23:13:28.472945 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Apr 23 23:13:28.472945 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 23 23:13:28.472945 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 23 23:13:28.472945 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 23 23:13:28.472945 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Apr 23 23:13:28.531141 systemd-networkd[814]: eth0: Gained IPv6LL Apr 23 23:13:28.937684 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 23 23:13:29.298307 systemd-networkd[814]: eth1: Gained IPv6LL Apr 23 23:13:30.768178 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Apr 23 23:13:30.768178 ignition[999]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 23 23:13:30.771014 ignition[999]: INFO : 
files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Apr 23 23:13:30.771014 ignition[999]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Apr 23 23:13:30.771014 ignition[999]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 23 23:13:30.771014 ignition[999]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 23 23:13:30.771014 ignition[999]: INFO : files: files passed Apr 23 23:13:30.771014 ignition[999]: INFO : Ignition finished successfully Apr 23 23:13:30.773465 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 23 23:13:30.775840 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 23 23:13:30.780140 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 23 23:13:30.794760 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 23 23:13:30.794857 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 23 23:13:30.803365 initrd-setup-root-after-ignition[1027]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 23 23:13:30.803365 initrd-setup-root-after-ignition[1027]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 23 23:13:30.806058 initrd-setup-root-after-ignition[1031]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 23 23:13:30.807974 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 23 23:13:30.811174 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 23 23:13:30.812794 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 23 23:13:30.871373 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 23 23:13:30.871621 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 23 23:13:30.875498 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 23 23:13:30.877649 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 23 23:13:30.879183 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 23 23:13:30.879979 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 23 23:13:30.900403 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 23 23:13:30.903995 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 23 23:13:30.922862 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 23 23:13:30.924482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 23 23:13:30.925257 systemd[1]: Stopped target timers.target - Timer Units. Apr 23 23:13:30.927301 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Apr 23 23:13:30.928103 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 23 23:13:30.929896 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 23 23:13:30.931249 systemd[1]: Stopped target basic.target - Basic System. Apr 23 23:13:30.931852 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 23 23:13:30.933726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 23 23:13:30.935323 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 23 23:13:30.936676 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 23 23:13:30.937851 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 23 23:13:30.939123 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 23 23:13:30.940434 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 23 23:13:30.941673 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 23 23:13:30.942727 systemd[1]: Stopped target swap.target - Swaps. Apr 23 23:13:30.943945 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 23 23:13:30.944103 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 23 23:13:30.945608 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 23 23:13:30.946339 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 23 23:13:30.947594 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 23 23:13:30.947674 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 23 23:13:30.948833 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 23 23:13:30.948949 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 23 23:13:30.950739 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Apr 23 23:13:30.950854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 23 23:13:30.952512 systemd[1]: ignition-files.service: Deactivated successfully. Apr 23 23:13:30.952606 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 23 23:13:30.953756 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Apr 23 23:13:30.953846 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Apr 23 23:13:30.957207 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 23 23:13:30.960519 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 23 23:13:30.961142 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 23 23:13:30.961267 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 23 23:13:30.963944 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 23 23:13:30.964176 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 23 23:13:30.975270 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 23 23:13:30.977210 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 23 23:13:30.987158 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 23 23:13:30.992662 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 23 23:13:30.992811 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 23 23:13:30.999133 ignition[1051]: INFO : Ignition 2.22.0 Apr 23 23:13:30.999133 ignition[1051]: INFO : Stage: umount Apr 23 23:13:31.000366 ignition[1051]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 23 23:13:31.000366 ignition[1051]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Apr 23 23:13:31.000366 ignition[1051]: INFO : umount: umount passed Apr 23 23:13:31.003776 ignition[1051]: INFO : Ignition finished successfully Apr 23 23:13:31.004564 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 23 23:13:31.004701 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 23 23:13:31.006946 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 23 23:13:31.007101 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 23 23:13:31.009160 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 23 23:13:31.009255 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 23 23:13:31.010801 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 23 23:13:31.010844 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 23 23:13:31.011791 systemd[1]: Stopped target network.target - Network. Apr 23 23:13:31.012705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 23 23:13:31.012754 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 23 23:13:31.013756 systemd[1]: Stopped target paths.target - Path Units. Apr 23 23:13:31.014656 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 23 23:13:31.018107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 23 23:13:31.020366 systemd[1]: Stopped target slices.target - Slice Units. Apr 23 23:13:31.021312 systemd[1]: Stopped target sockets.target - Socket Units. Apr 23 23:13:31.022502 systemd[1]: iscsid.socket: Deactivated successfully. 
Apr 23 23:13:31.022552 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 23 23:13:31.023704 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 23 23:13:31.023743 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 23 23:13:31.024825 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 23 23:13:31.024886 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 23 23:13:31.026035 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 23 23:13:31.026079 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 23 23:13:31.027156 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 23 23:13:31.027200 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 23 23:13:31.028434 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 23 23:13:31.029439 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 23 23:13:31.035092 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 23 23:13:31.035686 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 23 23:13:31.040463 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 23 23:13:31.040833 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 23 23:13:31.040879 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 23 23:13:31.043124 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 23 23:13:31.044017 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 23 23:13:31.044506 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 23 23:13:31.048940 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 23 23:13:31.049174 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Apr 23 23:13:31.050492 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 23 23:13:31.050528 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 23 23:13:31.052966 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 23 23:13:31.053524 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 23 23:13:31.053577 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 23 23:13:31.056879 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 23 23:13:31.056926 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 23 23:13:31.060390 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 23 23:13:31.060436 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 23 23:13:31.062427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 23 23:13:31.065789 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 23 23:13:31.085510 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 23 23:13:31.085707 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 23 23:13:31.089405 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 23 23:13:31.089507 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 23 23:13:31.091391 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 23 23:13:31.091461 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 23 23:13:31.093046 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 23 23:13:31.093082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 23 23:13:31.094184 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 23 23:13:31.094235 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Apr 23 23:13:31.095810 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 23 23:13:31.095853 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 23 23:13:31.097602 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 23 23:13:31.097659 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 23 23:13:31.100264 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 23 23:13:31.102762 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 23 23:13:31.102845 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Apr 23 23:13:31.106849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 23 23:13:31.106927 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 23 23:13:31.110228 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 23 23:13:31.110318 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 23 23:13:31.113043 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 23 23:13:31.115146 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 23 23:13:31.116122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 23 23:13:31.116174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 23 23:13:31.123742 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 23 23:13:31.125066 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 23 23:13:31.125984 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 23 23:13:31.128512 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 23 23:13:31.152640 systemd[1]: Switching root. 
Apr 23 23:13:31.183036 systemd-journald[245]: Journal stopped
Apr 23 23:13:32.212398 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Apr 23 23:13:32.212480 kernel: SELinux: policy capability network_peer_controls=1
Apr 23 23:13:32.212493 kernel: SELinux: policy capability open_perms=1
Apr 23 23:13:32.212502 kernel: SELinux: policy capability extended_socket_class=1
Apr 23 23:13:32.212511 kernel: SELinux: policy capability always_check_network=0
Apr 23 23:13:32.212520 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 23 23:13:32.212533 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 23 23:13:32.212542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 23 23:13:32.212551 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 23 23:13:32.212562 kernel: SELinux: policy capability userspace_initial_context=0
Apr 23 23:13:32.212570 kernel: audit: type=1403 audit(1776986011.369:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 23 23:13:32.212581 systemd[1]: Successfully loaded SELinux policy in 51.255ms.
Apr 23 23:13:32.212601 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.952ms.
Apr 23 23:13:32.212613 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 23 23:13:32.212624 systemd[1]: Detected virtualization kvm.
Apr 23 23:13:32.212634 systemd[1]: Detected architecture arm64.
Apr 23 23:13:32.212643 systemd[1]: Detected first boot.
Apr 23 23:13:32.212659 systemd[1]: Hostname set to .
Apr 23 23:13:32.212669 systemd[1]: Initializing machine ID from VM UUID.
Apr 23 23:13:32.212679 zram_generator::config[1095]: No configuration found.
Apr 23 23:13:32.212690 kernel: NET: Registered PF_VSOCK protocol family
Apr 23 23:13:32.212699 systemd[1]: Populated /etc with preset unit settings.
Apr 23 23:13:32.212710 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 23 23:13:32.212721 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 23 23:13:32.212736 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 23 23:13:32.212747 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 23 23:13:32.212757 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 23 23:13:32.212767 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 23 23:13:32.212777 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 23 23:13:32.212786 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 23 23:13:32.212796 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 23 23:13:32.212808 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 23 23:13:32.212822 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 23 23:13:32.212834 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 23 23:13:32.212846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 23 23:13:32.212857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 23 23:13:32.212867 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 23 23:13:32.212877 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 23 23:13:32.212887 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 23 23:13:32.212898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 23 23:13:32.212909 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 23 23:13:32.212920 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 23 23:13:32.212930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 23 23:13:32.212940 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 23 23:13:32.212950 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 23 23:13:32.212959 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 23 23:13:32.212971 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 23 23:13:32.212982 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 23 23:13:32.212992 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 23 23:13:32.213002 systemd[1]: Reached target slices.target - Slice Units.
Apr 23 23:13:32.213012 systemd[1]: Reached target swap.target - Swaps.
Apr 23 23:13:32.215052 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 23 23:13:32.215078 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 23 23:13:32.215088 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 23 23:13:32.215098 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 23 23:13:32.215114 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 23 23:13:32.215124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 23 23:13:32.215134 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 23 23:13:32.215144 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 23 23:13:32.215153 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 23 23:13:32.215163 systemd[1]: Mounting media.mount - External Media Directory...
Apr 23 23:13:32.215172 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 23 23:13:32.215183 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 23 23:13:32.215198 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 23 23:13:32.215210 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 23 23:13:32.215221 systemd[1]: Reached target machines.target - Containers.
Apr 23 23:13:32.215230 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 23 23:13:32.215240 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:32.215250 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 23 23:13:32.215260 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 23 23:13:32.215286 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 23 23:13:32.215297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 23 23:13:32.215352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 23 23:13:32.215365 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 23 23:13:32.215374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 23 23:13:32.215384 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 23 23:13:32.215399 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 23 23:13:32.215411 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 23 23:13:32.215421 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 23 23:13:32.215431 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 23 23:13:32.215443 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:32.215454 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 23 23:13:32.215464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 23 23:13:32.215475 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 23 23:13:32.215484 kernel: fuse: init (API version 7.41)
Apr 23 23:13:32.215495 kernel: loop: module loaded
Apr 23 23:13:32.215505 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 23 23:13:32.215515 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 23 23:13:32.215526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 23 23:13:32.215536 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 23 23:13:32.215546 systemd[1]: Stopped verity-setup.service.
Apr 23 23:13:32.215558 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 23 23:13:32.215569 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 23 23:13:32.215579 systemd[1]: Mounted media.mount - External Media Directory.
Apr 23 23:13:32.215590 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 23 23:13:32.215600 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 23 23:13:32.215609 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 23 23:13:32.215620 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 23 23:13:32.215629 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 23 23:13:32.215641 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 23 23:13:32.215652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 23 23:13:32.215662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 23 23:13:32.215672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 23 23:13:32.215682 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 23 23:13:32.215691 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 23 23:13:32.215701 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 23 23:13:32.215711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 23 23:13:32.215721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 23 23:13:32.215732 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 23 23:13:32.215742 kernel: ACPI: bus type drm_connector registered
Apr 23 23:13:32.215752 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 23 23:13:32.215762 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 23 23:13:32.215772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 23 23:13:32.215782 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 23 23:13:32.215791 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 23 23:13:32.215802 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 23 23:13:32.215812 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 23 23:13:32.215824 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 23 23:13:32.215834 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 23 23:13:32.215881 systemd-journald[1163]: Collecting audit messages is disabled.
Apr 23 23:13:32.215911 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 23 23:13:32.215923 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 23 23:13:32.215934 systemd-journald[1163]: Journal started
Apr 23 23:13:32.215955 systemd-journald[1163]: Runtime Journal (/run/log/journal/300d3793fb7748fdaae78ce01375cbd5) is 8M, max 76.5M, 68.5M free.
Apr 23 23:13:32.221086 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 23 23:13:31.907189 systemd[1]: Queued start job for default target multi-user.target.
Apr 23 23:13:31.917682 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 23 23:13:31.918247 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 23 23:13:32.228041 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 23 23:13:32.228106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:32.232520 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 23 23:13:32.234046 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 23 23:13:32.238045 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 23 23:13:32.238111 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 23 23:13:32.243065 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 23 23:13:32.255229 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 23 23:13:32.264327 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 23 23:13:32.264409 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 23 23:13:32.266967 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 23 23:13:32.269368 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 23 23:13:32.297017 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 23 23:13:32.302827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 23 23:13:32.307105 kernel: loop0: detected capacity change from 0 to 119840
Apr 23 23:13:32.307510 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 23 23:13:32.312049 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 23 23:13:32.335091 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 23 23:13:32.351162 systemd-journald[1163]: Time spent on flushing to /var/log/journal/300d3793fb7748fdaae78ce01375cbd5 is 43.271ms for 1176 entries.
Apr 23 23:13:32.351162 systemd-journald[1163]: System Journal (/var/log/journal/300d3793fb7748fdaae78ce01375cbd5) is 8M, max 584.8M, 576.8M free.
Apr 23 23:13:32.416288 systemd-journald[1163]: Received client request to flush runtime journal.
Apr 23 23:13:32.416363 kernel: loop1: detected capacity change from 0 to 100632
Apr 23 23:13:32.416385 kernel: loop2: detected capacity change from 0 to 209336
Apr 23 23:13:32.361127 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 23 23:13:32.370087 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 23 23:13:32.370097 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Apr 23 23:13:32.373441 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 23 23:13:32.384782 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 23 23:13:32.390122 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 23 23:13:32.395266 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 23 23:13:32.420434 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 23 23:13:32.435062 kernel: loop3: detected capacity change from 0 to 8
Apr 23 23:13:32.449083 kernel: loop4: detected capacity change from 0 to 119840
Apr 23 23:13:32.459268 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 23 23:13:32.464142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 23 23:13:32.471069 kernel: loop5: detected capacity change from 0 to 100632
Apr 23 23:13:32.487054 kernel: loop6: detected capacity change from 0 to 209336
Apr 23 23:13:32.498382 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Apr 23 23:13:32.498402 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Apr 23 23:13:32.502108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 23 23:13:32.522057 kernel: loop7: detected capacity change from 0 to 8
Apr 23 23:13:32.523328 (sd-merge)[1240]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Apr 23 23:13:32.524418 (sd-merge)[1240]: Merged extensions into '/usr'.
Apr 23 23:13:32.530911 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 23 23:13:32.530938 systemd[1]: Reloading...
Apr 23 23:13:32.675061 zram_generator::config[1270]: No configuration found.
Apr 23 23:13:32.833974 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 23 23:13:32.888419 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 23 23:13:32.888558 systemd[1]: Reloading finished in 357 ms.
Apr 23 23:13:32.910949 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 23 23:13:32.914402 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 23 23:13:32.934281 systemd[1]: Starting ensure-sysext.service...
Apr 23 23:13:32.938344 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 23 23:13:32.968725 systemd[1]: Reload requested from client PID 1307 ('systemctl') (unit ensure-sysext.service)...
Apr 23 23:13:32.968748 systemd[1]: Reloading...
Apr 23 23:13:32.992645 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 23 23:13:32.992683 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 23 23:13:32.992962 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 23 23:13:32.993183 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 23 23:13:32.994134 systemd-tmpfiles[1308]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 23 23:13:32.994484 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Apr 23 23:13:32.994558 systemd-tmpfiles[1308]: ACLs are not supported, ignoring.
Apr 23 23:13:33.003341 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Apr 23 23:13:33.003356 systemd-tmpfiles[1308]: Skipping /boot
Apr 23 23:13:33.026948 systemd-tmpfiles[1308]: Detected autofs mount point /boot during canonicalization of boot.
Apr 23 23:13:33.031090 systemd-tmpfiles[1308]: Skipping /boot
Apr 23 23:13:33.051106 zram_generator::config[1334]: No configuration found.
Apr 23 23:13:33.223442 systemd[1]: Reloading finished in 254 ms.
Apr 23 23:13:33.238070 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 23 23:13:33.244861 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 23 23:13:33.254356 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 23 23:13:33.258501 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 23 23:13:33.262365 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 23 23:13:33.270200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 23 23:13:33.273823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 23 23:13:33.277272 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 23 23:13:33.284649 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:33.288393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 23 23:13:33.294446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 23 23:13:33.301231 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 23 23:13:33.302201 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:33.302356 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:33.306647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:33.306792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:33.306871 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:33.309497 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 23 23:13:33.315727 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 23 23:13:33.317234 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 23 23:13:33.317415 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 23 23:13:33.322691 systemd[1]: Finished ensure-sysext.service.
Apr 23 23:13:33.325898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 23 23:13:33.326250 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 23 23:13:33.340851 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 23 23:13:33.349379 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 23 23:13:33.352341 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 23 23:13:33.356320 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 23 23:13:33.362260 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 23 23:13:33.374537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 23 23:13:33.374790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 23 23:13:33.376254 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 23 23:13:33.381355 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 23 23:13:33.382567 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 23 23:13:33.382733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 23 23:13:33.385684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 23 23:13:33.385756 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 23 23:13:33.386319 systemd-udevd[1378]: Using default interface naming scheme 'v255'.
Apr 23 23:13:33.388716 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 23 23:13:33.420970 augenrules[1415]: No rules
Apr 23 23:13:33.423448 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 23 23:13:33.425103 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 23 23:13:33.427539 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 23 23:13:33.429564 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 23 23:13:33.435650 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 23 23:13:33.439449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 23 23:13:33.451099 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 23 23:13:33.584821 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Apr 23 23:13:33.671052 kernel: mousedev: PS/2 mouse device common for all mice
Apr 23 23:13:33.757191 systemd-networkd[1425]: lo: Link UP
Apr 23 23:13:33.757630 systemd-networkd[1425]: lo: Gained carrier
Apr 23 23:13:33.759410 systemd-networkd[1425]: Enumeration completed
Apr 23 23:13:33.759663 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 23 23:13:33.762312 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:13:33.762428 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 23 23:13:33.763209 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 23 23:13:33.766453 systemd-networkd[1425]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:13:33.766465 systemd-networkd[1425]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 23 23:13:33.766957 systemd-networkd[1425]: eth0: Link UP
Apr 23 23:13:33.767098 systemd-networkd[1425]: eth0: Gained carrier
Apr 23 23:13:33.767115 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:13:33.767902 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 23 23:13:33.770263 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 23 23:13:33.771113 systemd[1]: Reached target time-set.target - System Time Set.
Apr 23 23:13:33.772116 systemd-networkd[1425]: eth1: Link UP
Apr 23 23:13:33.774587 systemd-networkd[1425]: eth1: Gained carrier
Apr 23 23:13:33.774618 systemd-networkd[1425]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 23 23:13:33.776231 systemd-resolved[1377]: Positive Trust Anchors:
Apr 23 23:13:33.776251 systemd-resolved[1377]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 23 23:13:33.776320 systemd-resolved[1377]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 23 23:13:33.788162 systemd-resolved[1377]: Using system hostname 'ci-4459-2-4-n-a35467bd0b'.
Apr 23 23:13:33.791892 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 23 23:13:33.793133 systemd[1]: Reached target network.target - Network.
Apr 23 23:13:33.793928 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 23 23:13:33.795846 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 23 23:13:33.797297 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 23 23:13:33.800989 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 23 23:13:33.802657 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 23 23:13:33.803927 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 23 23:13:33.810141 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 23 23:13:33.811224 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 23 23:13:33.811253 systemd[1]: Reached target paths.target - Path Units.
Apr 23 23:13:33.811950 systemd[1]: Reached target timers.target - Timer Units.
Apr 23 23:13:33.814445 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 23 23:13:33.817607 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 23 23:13:33.820571 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 23 23:13:33.822210 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 23 23:13:33.823613 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 23 23:13:33.831125 systemd-networkd[1425]: eth0: DHCPv4 address 49.13.208.85/32, gateway 172.31.1.1 acquired from 172.31.1.1
Apr 23 23:13:33.839266 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 23 23:13:33.841991 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 23 23:13:33.845199 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 23 23:13:33.846164 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 23 23:13:33.848305 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Apr 23 23:13:33.852089 systemd-networkd[1425]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Apr 23 23:13:33.853508 systemd-timesyncd[1393]: Network configuration changed, trying to establish connection.
Apr 23 23:13:33.867647 systemd[1]: Reached target sockets.target - Socket Units.
Apr 23 23:13:33.868307 systemd[1]: Reached target basic.target - Basic System.
Apr 23 23:13:33.868876 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 23 23:13:33.868908 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 23 23:13:33.870192 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 23 23:13:33.873091 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 23 23:13:33.877333 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 23 23:13:33.879599 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 23 23:13:33.885622 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 23 23:13:33.901487 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 23 23:13:33.902202 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 23 23:13:33.906265 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 23 23:13:33.912159 systemd-timesyncd[1393]: Contacted time server 85.215.64.237:123 (0.flatcar.pool.ntp.org).
Apr 23 23:13:33.912360 systemd-timesyncd[1393]: Initial clock synchronization to Thu 2026-04-23 23:13:33.766093 UTC.
Apr 23 23:13:33.914402 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Apr 23 23:13:33.916074 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Apr 23 23:13:33.916135 kernel: [drm] features: -context_init
Apr 23 23:13:33.918198 kernel: [drm] number of scanouts: 1
Apr 23 23:13:33.918321 kernel: [drm] number of cap sets: 0
Apr 23 23:13:33.919050 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Apr 23 23:13:33.927393 kernel: Console: switching to colour frame buffer device 160x50
Apr 23 23:13:33.930005 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 23 23:13:33.939265 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Apr 23 23:13:33.940489 jq[1489]: false
Apr 23 23:13:33.946417 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 23 23:13:33.950251 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 23 23:13:33.956490 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 23 23:13:33.959821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 23 23:13:33.960458 coreos-metadata[1486]: Apr 23 23:13:33.960 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Apr 23 23:13:33.963597 coreos-metadata[1486]: Apr 23 23:13:33.963 INFO Fetch successful
Apr 23 23:13:33.963597 coreos-metadata[1486]: Apr 23 23:13:33.963 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Apr 23 23:13:33.967044 coreos-metadata[1486]: Apr 23 23:13:33.965 INFO Fetch successful
Apr 23 23:13:33.975924 extend-filesystems[1490]: Found /dev/sda6
Apr 23 23:13:33.978173 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Apr 23 23:13:33.981525 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 23 23:13:33.982171 systemd[1]: Starting update-engine.service - Update Engine...
Apr 23 23:13:33.986307 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 23 23:13:33.988047 extend-filesystems[1490]: Found /dev/sda9
Apr 23 23:13:33.996092 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 23 23:13:33.997287 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 23 23:13:33.997626 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 23 23:13:34.004327 extend-filesystems[1490]: Checking size of /dev/sda9
Apr 23 23:13:34.003687 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 23 23:13:34.003892 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 23 23:13:34.020837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 23 23:13:34.032308 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 23 23:13:34.056292 extend-filesystems[1490]: Resized partition /dev/sda9
Apr 23 23:13:34.057471 jq[1515]: true
Apr 23 23:13:34.061037 extend-filesystems[1541]: resize2fs 1.47.3 (8-Jul-2025)
Apr 23 23:13:34.065614 systemd[1]: motdgen.service: Deactivated successfully.
Apr 23 23:13:34.066186 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 23 23:13:34.080351 tar[1520]: linux-arm64/LICENSE
Apr 23 23:13:34.080351 tar[1520]: linux-arm64/helm
Apr 23 23:13:34.073935 (ntainerd)[1534]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 23 23:13:34.098941 dbus-daemon[1487]: [system] SELinux support is enabled
Apr 23 23:13:34.104750 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 23 23:13:34.108586 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 23 23:13:34.108627 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 23 23:13:34.112838 jq[1542]: true
Apr 23 23:13:34.113923 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 23 23:13:34.113967 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 23 23:13:34.122049 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Apr 23 23:13:34.127125 update_engine[1514]: I20260423 23:13:34.126224 1514 main.cc:92] Flatcar Update Engine starting
Apr 23 23:13:34.134890 update_engine[1514]: I20260423 23:13:34.134672 1514 update_check_scheduler.cc:74] Next update check in 10m20s
Apr 23 23:13:34.144503 systemd[1]: Started update-engine.service - Update Engine.
Apr 23 23:13:34.156410 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 23 23:13:34.188082 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 23 23:13:34.220167 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 23 23:13:34.221140 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 23 23:13:34.289651 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Apr 23 23:13:34.304996 extend-filesystems[1541]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 23 23:13:34.304996 extend-filesystems[1541]: old_desc_blocks = 1, new_desc_blocks = 5
Apr 23 23:13:34.304996 extend-filesystems[1541]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Apr 23 23:13:34.317665 extend-filesystems[1490]: Resized filesystem in /dev/sda9
Apr 23 23:13:34.325370 bash[1579]: Updated "/home/core/.ssh/authorized_keys"
Apr 23 23:13:34.307511 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 23 23:13:34.307750 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 23 23:13:34.315065 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 23 23:13:34.322432 systemd[1]: Starting sshkeys.service...
Apr 23 23:13:34.369289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 23 23:13:34.381681 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 23 23:13:34.387430 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 23 23:13:34.419926 coreos-metadata[1591]: Apr 23 23:13:34.419 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Apr 23 23:13:34.421251 coreos-metadata[1591]: Apr 23 23:13:34.420 INFO Fetch successful
Apr 23 23:13:34.425966 unknown[1591]: wrote ssh authorized keys file for user: core
Apr 23 23:13:34.456505 update-ssh-keys[1597]: Updated "/home/core/.ssh/authorized_keys"
Apr 23 23:13:34.457345 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 23 23:13:34.460841 systemd[1]: Finished sshkeys.service.
Apr 23 23:13:34.464359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 23 23:13:34.466111 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 23 23:13:34.473944 locksmithd[1554]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 23 23:13:34.480826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 23 23:13:34.516791 containerd[1534]: time="2026-04-23T23:13:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 23 23:13:34.530701 containerd[1534]: time="2026-04-23T23:13:34.530480613Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 23 23:13:34.578590 containerd[1534]: time="2026-04-23T23:13:34.578271165Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.114µs"
Apr 23 23:13:34.578590 containerd[1534]: time="2026-04-23T23:13:34.578316640Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 23 23:13:34.578590 containerd[1534]: time="2026-04-23T23:13:34.578337571Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 23 23:13:34.578590 containerd[1534]: time="2026-04-23T23:13:34.578498029Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 23 23:13:34.578590 containerd[1534]: time="2026-04-23T23:13:34.578531997Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 23 23:13:34.578590 containerd[1534]: time="2026-04-23T23:13:34.578562942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 23 23:13:34.579096 containerd[1534]: time="2026-04-23T23:13:34.578619413Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 23 23:13:34.579096 containerd[1534]: time="2026-04-23T23:13:34.578631586Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 23 23:13:34.579096 containerd[1534]: time="2026-04-23T23:13:34.578838854Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 23 23:13:34.579096 containerd[1534]: time="2026-04-23T23:13:34.578855858Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 23 23:13:34.579096 containerd[1534]: time="2026-04-23T23:13:34.578865833Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 23 23:13:34.579096 containerd[1534]: time="2026-04-23T23:13:34.578877064Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 23 23:13:34.579096 containerd[1534]: time="2026-04-23T23:13:34.578941624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 23 23:13:34.584042 containerd[1534]: time="2026-04-23T23:13:34.583227961Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 23 23:13:34.584042 containerd[1534]: time="2026-04-23T23:13:34.583309093Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 23 23:13:34.584042 containerd[1534]: time="2026-04-23T23:13:34.583322091Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 23 23:13:34.584042 containerd[1534]: time="2026-04-23T23:13:34.583378522Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 23 23:13:34.584042 containerd[1534]: time="2026-04-23T23:13:34.583723196Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 23 23:13:34.584042 containerd[1534]: time="2026-04-23T23:13:34.583806685Z" level=info msg="metadata content store policy set" policy=shared
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591787675Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591864408Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591879527Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591891858Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591907723Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591918758Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591930932Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591942517Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591953670Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591963566Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591973187Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.591991879Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.592154497Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 23 23:13:34.592293 containerd[1534]: time="2026-04-23T23:13:34.592183321Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592197459Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592208454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592218861Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592245604Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592275096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592287466Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592299914Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592310832Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592322063Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592492299Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592506750Z" level=info msg="Start snapshots syncer"
Apr 23 23:13:34.592598 containerd[1534]: time="2026-04-23T23:13:34.592528859Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 23 23:13:34.592794 containerd[1534]: time="2026-04-23T23:13:34.592768721Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 23 23:13:34.592867 containerd[1534]: time="2026-04-23T23:13:34.592814628Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 23 23:13:34.592867 containerd[1534]: time="2026-04-23T23:13:34.592857629Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 23 23:13:34.593015 containerd[1534]: time="2026-04-23T23:13:34.592953487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 23 23:13:34.593015 containerd[1534]: time="2026-04-23T23:13:34.592988045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 23 23:13:34.593015 containerd[1534]: time="2026-04-23T23:13:34.593000180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 23 23:13:34.598236 containerd[1534]: time="2026-04-23T23:13:34.598178891Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 23 23:13:34.598356 containerd[1534]: time="2026-04-23T23:13:34.598246357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 23 23:13:34.598356 containerd[1534]: time="2026-04-23T23:13:34.598262615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 23 23:13:34.598356 containerd[1534]: time="2026-04-23T23:13:34.598275299Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 23 23:13:34.598356 containerd[1534]: time="2026-04-23T23:13:34.598313273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 23 23:13:34.598356 containerd[1534]: time="2026-04-23T23:13:34.598324858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 23 23:13:34.598356 containerd[1534]: time="2026-04-23T23:13:34.598338799Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 23 23:13:34.598477 containerd[1534]: time="2026-04-23T23:13:34.598396840Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 23 23:13:34.598477 containerd[1534]: time="2026-04-23T23:13:34.598412509Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 23 23:13:34.598477 containerd[1534]: time="2026-04-23T23:13:34.598421659Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 23 23:13:34.598477 containerd[1534]: time="2026-04-23T23:13:34.598431162Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 23 23:13:34.598548 containerd[1534]: time="2026-04-23T23:13:34.598486258Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 23 23:13:34.598548 containerd[1534]: time="2026-04-23T23:13:34.598500788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 23 23:13:34.598548 containerd[1534]: time="2026-04-23T23:13:34.598511980Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 23 23:13:34.598596 containerd[1534]: time="2026-04-23T23:13:34.598588596Z" level=info msg="runtime interface created"
Apr 23 23:13:34.598596 containerd[1534]: time="2026-04-23T23:13:34.598593898Z" level=info msg="created NRI interface"
Apr 23 23:13:34.598630 containerd[1534]: time="2026-04-23T23:13:34.598604147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 23 23:13:34.598630 containerd[1534]: time="2026-04-23T23:13:34.598618717Z" level=info msg="Connect containerd service"
Apr 23 23:13:34.598661 containerd[1534]: time="2026-04-23T23:13:34.598642161Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 23 23:13:34.603459 containerd[1534]: time="2026-04-23T23:13:34.603151316Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 23 23:13:34.649475 systemd-logind[1504]: New seat seat0.
Apr 23 23:13:34.653904 systemd-logind[1504]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 23 23:13:34.653926 systemd-logind[1504]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Apr 23 23:13:34.654993 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 23 23:13:34.670015 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 23 23:13:34.773708 containerd[1534]: time="2026-04-23T23:13:34.773646430Z" level=info msg="Start subscribing containerd event"
Apr 23 23:13:34.773991 containerd[1534]: time="2026-04-23T23:13:34.773971783Z" level=info msg="Start recovering state"
Apr 23 23:13:34.775184 containerd[1534]: time="2026-04-23T23:13:34.775153461Z" level=info msg="Start event monitor"
Apr 23 23:13:34.775184 containerd[1534]: time="2026-04-23T23:13:34.775185663Z" level=info msg="Start cni network conf syncer for default"
Apr 23 23:13:34.775279 containerd[1534]: time="2026-04-23T23:13:34.775197836Z" level=info msg="Start streaming server"
Apr 23 23:13:34.775279 containerd[1534]: time="2026-04-23T23:13:34.775206672Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Apr 23 23:13:34.775279 containerd[1534]: time="2026-04-23T23:13:34.775213819Z" level=info msg="runtime interface starting up..."
Apr 23 23:13:34.775279 containerd[1534]: time="2026-04-23T23:13:34.775219513Z" level=info msg="starting plugins..."
Apr 23 23:13:34.775279 containerd[1534]: time="2026-04-23T23:13:34.775235025Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Apr 23 23:13:34.776244 containerd[1534]: time="2026-04-23T23:13:34.776130188Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 23 23:13:34.776297 containerd[1534]: time="2026-04-23T23:13:34.776276744Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 23 23:13:34.778970 containerd[1534]: time="2026-04-23T23:13:34.778117140Z" level=info msg="containerd successfully booted in 0.261758s"
Apr 23 23:13:34.778231 systemd[1]: Started containerd.service - containerd container runtime.
Apr 23 23:13:34.893777 tar[1520]: linux-arm64/README.md
Apr 23 23:13:34.912068 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 23 23:13:35.123094 systemd-networkd[1425]: eth0: Gained IPv6LL
Apr 23 23:13:35.126680 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 23 23:13:35.129278 systemd[1]: Reached target network-online.target - Network is Online.
Apr 23 23:13:35.133205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:13:35.140145 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 23 23:13:35.177695 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 23 23:13:35.378197 systemd-networkd[1425]: eth1: Gained IPv6LL
Apr 23 23:13:35.733048 sshd_keygen[1537]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 23 23:13:35.754968 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 23 23:13:35.759252 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 23 23:13:35.780279 systemd[1]: issuegen.service: Deactivated successfully.
Apr 23 23:13:35.780774 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 23 23:13:35.787249 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 23 23:13:35.804158 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 23 23:13:35.806959 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 23 23:13:35.810400 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 23 23:13:35.813338 systemd[1]: Reached target getty.target - Login Prompts.
Apr 23 23:13:35.918661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:13:35.920501 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 23 23:13:35.925238 systemd[1]: Startup finished in 2.400s (kernel) + 6.754s (initrd) + 4.606s (userspace) = 13.762s.
Apr 23 23:13:35.928732 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 23 23:13:36.452607 kubelet[1661]: E0423 23:13:36.452510 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 23 23:13:36.455582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 23 23:13:36.455715 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 23 23:13:36.456039 systemd[1]: kubelet.service: Consumed 852ms CPU time, 259.3M memory peak.
Apr 23 23:13:46.706436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 23 23:13:46.708722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:13:46.873819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:13:46.888660 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 23 23:13:46.933833 kubelet[1680]: E0423 23:13:46.933730 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 23 23:13:46.938294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 23 23:13:46.938613 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 23 23:13:46.939236 systemd[1]: kubelet.service: Consumed 169ms CPU time, 105.5M memory peak.
Apr 23 23:13:57.189013 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 23 23:13:57.193138 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:13:57.364820 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:13:57.372601 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 23 23:13:57.413226 kubelet[1694]: E0423 23:13:57.413179 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 23 23:13:57.416404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 23 23:13:57.416602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 23 23:13:57.419224 systemd[1]: kubelet.service: Consumed 168ms CPU time, 106.9M memory peak.
Apr 23 23:14:07.667446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 23 23:14:07.670449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 23 23:14:07.839074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 23 23:14:07.850566 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 23 23:14:07.897141 kubelet[1710]: E0423 23:14:07.897016 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 23 23:14:07.899982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 23 23:14:07.900282 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 23 23:14:07.901007 systemd[1]: kubelet.service: Consumed 168ms CPU time, 106.7M memory peak.
Apr 23 23:14:11.616809 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 23 23:14:11.618964 systemd[1]: Started sshd@0-49.13.208.85:22-50.85.169.122:60382.service - OpenSSH per-connection server daemon (50.85.169.122:60382).
Apr 23 23:14:11.768719 sshd[1719]: Accepted publickey for core from 50.85.169.122 port 60382 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM
Apr 23 23:14:11.771734 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 23 23:14:11.780921 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 23 23:14:11.782268 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 23 23:14:11.794272 systemd-logind[1504]: New session 1 of user core.
Apr 23 23:14:11.808325 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 23 23:14:11.813597 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 23 23:14:11.833804 (systemd)[1724]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 23 23:14:11.837140 systemd-logind[1504]: New session c1 of user core. Apr 23 23:14:11.966421 systemd[1724]: Queued start job for default target default.target. Apr 23 23:14:11.982098 systemd[1724]: Created slice app.slice - User Application Slice. Apr 23 23:14:11.982156 systemd[1724]: Reached target paths.target - Paths. Apr 23 23:14:11.982220 systemd[1724]: Reached target timers.target - Timers. Apr 23 23:14:11.984465 systemd[1724]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 23 23:14:12.005908 systemd[1724]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 23 23:14:12.006280 systemd[1724]: Reached target sockets.target - Sockets. Apr 23 23:14:12.006517 systemd[1724]: Reached target basic.target - Basic System. Apr 23 23:14:12.006709 systemd[1724]: Reached target default.target - Main User Target. Apr 23 23:14:12.006732 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 23 23:14:12.006992 systemd[1724]: Startup finished in 161ms. Apr 23 23:14:12.014569 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 23 23:14:12.077192 systemd[1]: Started sshd@1-49.13.208.85:22-50.85.169.122:60398.service - OpenSSH per-connection server daemon (50.85.169.122:60398). Apr 23 23:14:12.216123 sshd[1735]: Accepted publickey for core from 50.85.169.122 port 60398 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:12.218593 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:12.224909 systemd-logind[1504]: New session 2 of user core. Apr 23 23:14:12.233382 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 23 23:14:12.277575 sshd[1738]: Connection closed by 50.85.169.122 port 60398 Apr 23 23:14:12.278526 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:12.283700 systemd[1]: sshd@1-49.13.208.85:22-50.85.169.122:60398.service: Deactivated successfully. Apr 23 23:14:12.285931 systemd[1]: session-2.scope: Deactivated successfully. Apr 23 23:14:12.287165 systemd-logind[1504]: Session 2 logged out. Waiting for processes to exit. Apr 23 23:14:12.289686 systemd-logind[1504]: Removed session 2. Apr 23 23:14:12.305826 systemd[1]: Started sshd@2-49.13.208.85:22-50.85.169.122:60408.service - OpenSSH per-connection server daemon (50.85.169.122:60408). Apr 23 23:14:12.443103 sshd[1745]: Accepted publickey for core from 50.85.169.122 port 60408 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:12.445084 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:12.451109 systemd-logind[1504]: New session 3 of user core. Apr 23 23:14:12.460361 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 23 23:14:12.499003 sshd[1748]: Connection closed by 50.85.169.122 port 60408 Apr 23 23:14:12.500829 sshd-session[1745]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:12.508486 systemd-logind[1504]: Session 3 logged out. Waiting for processes to exit. Apr 23 23:14:12.508865 systemd[1]: sshd@2-49.13.208.85:22-50.85.169.122:60408.service: Deactivated successfully. Apr 23 23:14:12.511738 systemd[1]: session-3.scope: Deactivated successfully. Apr 23 23:14:12.527996 systemd-logind[1504]: Removed session 3. Apr 23 23:14:12.529327 systemd[1]: Started sshd@3-49.13.208.85:22-50.85.169.122:60410.service - OpenSSH per-connection server daemon (50.85.169.122:60410). 
Apr 23 23:14:12.670106 sshd[1754]: Accepted publickey for core from 50.85.169.122 port 60410 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:12.671979 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:12.677911 systemd-logind[1504]: New session 4 of user core. Apr 23 23:14:12.685358 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 23 23:14:12.733428 sshd[1757]: Connection closed by 50.85.169.122 port 60410 Apr 23 23:14:12.734445 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:12.740909 systemd-logind[1504]: Session 4 logged out. Waiting for processes to exit. Apr 23 23:14:12.741131 systemd[1]: sshd@3-49.13.208.85:22-50.85.169.122:60410.service: Deactivated successfully. Apr 23 23:14:12.742701 systemd[1]: session-4.scope: Deactivated successfully. Apr 23 23:14:12.744426 systemd-logind[1504]: Removed session 4. Apr 23 23:14:12.762537 systemd[1]: Started sshd@4-49.13.208.85:22-50.85.169.122:60424.service - OpenSSH per-connection server daemon (50.85.169.122:60424). Apr 23 23:14:12.892804 sshd[1763]: Accepted publickey for core from 50.85.169.122 port 60424 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:12.895152 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:12.900152 systemd-logind[1504]: New session 5 of user core. Apr 23 23:14:12.904261 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 23 23:14:12.943016 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 23 23:14:12.943329 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:12.961200 sudo[1767]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:12.977175 sshd[1766]: Connection closed by 50.85.169.122 port 60424 Apr 23 23:14:12.978337 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:12.986249 systemd[1]: sshd@4-49.13.208.85:22-50.85.169.122:60424.service: Deactivated successfully. Apr 23 23:14:12.988405 systemd[1]: session-5.scope: Deactivated successfully. Apr 23 23:14:12.989460 systemd-logind[1504]: Session 5 logged out. Waiting for processes to exit. Apr 23 23:14:12.991400 systemd-logind[1504]: Removed session 5. Apr 23 23:14:13.005615 systemd[1]: Started sshd@5-49.13.208.85:22-50.85.169.122:60428.service - OpenSSH per-connection server daemon (50.85.169.122:60428). Apr 23 23:14:13.140274 sshd[1773]: Accepted publickey for core from 50.85.169.122 port 60428 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:13.142295 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:13.147737 systemd-logind[1504]: New session 6 of user core. Apr 23 23:14:13.154410 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 23 23:14:13.185326 sudo[1778]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 23 23:14:13.186104 sudo[1778]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:13.190978 sudo[1778]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:13.197248 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 23 23:14:13.197546 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:13.211962 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 23 23:14:13.261260 augenrules[1800]: No rules Apr 23 23:14:13.263014 systemd[1]: audit-rules.service: Deactivated successfully. Apr 23 23:14:13.263303 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 23 23:14:13.264929 sudo[1777]: pam_unix(sudo:session): session closed for user root Apr 23 23:14:13.282065 sshd[1776]: Connection closed by 50.85.169.122 port 60428 Apr 23 23:14:13.282853 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Apr 23 23:14:13.287095 systemd-logind[1504]: Session 6 logged out. Waiting for processes to exit. Apr 23 23:14:13.287669 systemd[1]: sshd@5-49.13.208.85:22-50.85.169.122:60428.service: Deactivated successfully. Apr 23 23:14:13.290467 systemd[1]: session-6.scope: Deactivated successfully. Apr 23 23:14:13.308145 systemd-logind[1504]: Removed session 6. Apr 23 23:14:13.309600 systemd[1]: Started sshd@6-49.13.208.85:22-50.85.169.122:60430.service - OpenSSH per-connection server daemon (50.85.169.122:60430). 
Apr 23 23:14:13.434182 sshd[1809]: Accepted publickey for core from 50.85.169.122 port 60430 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:14:13.436950 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:14:13.442083 systemd-logind[1504]: New session 7 of user core. Apr 23 23:14:13.449413 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 23 23:14:13.481975 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 23 23:14:13.482282 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 23 23:14:13.823451 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 23 23:14:13.836700 (dockerd)[1831]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 23 23:14:14.070215 dockerd[1831]: time="2026-04-23T23:14:14.070132649Z" level=info msg="Starting up" Apr 23 23:14:14.072376 dockerd[1831]: time="2026-04-23T23:14:14.072297167Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 23 23:14:14.088211 dockerd[1831]: time="2026-04-23T23:14:14.085419200Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 23 23:14:14.118133 systemd[1]: var-lib-docker-metacopy\x2dcheck1383049049-merged.mount: Deactivated successfully. Apr 23 23:14:14.128492 dockerd[1831]: time="2026-04-23T23:14:14.128430112Z" level=info msg="Loading containers: start." Apr 23 23:14:14.139079 kernel: Initializing XFRM netlink socket Apr 23 23:14:14.413817 systemd-networkd[1425]: docker0: Link UP Apr 23 23:14:14.418680 dockerd[1831]: time="2026-04-23T23:14:14.418627940Z" level=info msg="Loading containers: done." 
Apr 23 23:14:14.432915 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1318608691-merged.mount: Deactivated successfully. Apr 23 23:14:14.438868 dockerd[1831]: time="2026-04-23T23:14:14.438799907Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 23 23:14:14.439065 dockerd[1831]: time="2026-04-23T23:14:14.438955833Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 23 23:14:14.439159 dockerd[1831]: time="2026-04-23T23:14:14.439115079Z" level=info msg="Initializing buildkit" Apr 23 23:14:14.469496 dockerd[1831]: time="2026-04-23T23:14:14.469416252Z" level=info msg="Completed buildkit initialization" Apr 23 23:14:14.481954 dockerd[1831]: time="2026-04-23T23:14:14.481863941Z" level=info msg="Daemon has completed initialization" Apr 23 23:14:14.482148 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 23 23:14:14.483451 dockerd[1831]: time="2026-04-23T23:14:14.482871977Z" level=info msg="API listen on /run/docker.sock" Apr 23 23:14:14.977519 containerd[1534]: time="2026-04-23T23:14:14.977449417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 23 23:14:15.501484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount632161116.mount: Deactivated successfully. 
Apr 23 23:14:16.585051 containerd[1534]: time="2026-04-23T23:14:16.584314688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:16.586553 containerd[1534]: time="2026-04-23T23:14:16.586519079Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008885" Apr 23 23:14:16.587366 containerd[1534]: time="2026-04-23T23:14:16.587335466Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:16.591118 containerd[1534]: time="2026-04-23T23:14:16.591080787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:16.593315 containerd[1534]: time="2026-04-23T23:14:16.593167415Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 1.615668196s" Apr 23 23:14:16.593315 containerd[1534]: time="2026-04-23T23:14:16.593210896Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\"" Apr 23 23:14:16.594324 containerd[1534]: time="2026-04-23T23:14:16.594247450Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 23 23:14:17.969898 containerd[1534]: time="2026-04-23T23:14:17.969833004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:17.971243 containerd[1534]: time="2026-04-23T23:14:17.971194646Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297794" Apr 23 23:14:17.972585 containerd[1534]: time="2026-04-23T23:14:17.972140675Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:17.976147 containerd[1534]: time="2026-04-23T23:14:17.976104237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:17.977374 containerd[1534]: time="2026-04-23T23:14:17.977336555Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 1.383057544s" Apr 23 23:14:17.977508 containerd[1534]: time="2026-04-23T23:14:17.977491640Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\"" Apr 23 23:14:17.978570 containerd[1534]: time="2026-04-23T23:14:17.978544752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 23 23:14:18.007137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 23 23:14:18.010062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:18.173224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 23 23:14:18.185062 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:14:18.228788 kubelet[2114]: E0423 23:14:18.228667 2114 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:14:18.232910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:14:18.233187 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:14:18.235395 systemd[1]: kubelet.service: Consumed 172ms CPU time, 106.6M memory peak. Apr 23 23:14:19.056593 containerd[1534]: time="2026-04-23T23:14:19.056514319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:19.058240 containerd[1534]: time="2026-04-23T23:14:19.058181325Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141378" Apr 23 23:14:19.059748 containerd[1534]: time="2026-04-23T23:14:19.059667606Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:19.064858 containerd[1534]: time="2026-04-23T23:14:19.063472632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:19.064858 containerd[1534]: time="2026-04-23T23:14:19.064539462Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id 
\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.085760702s" Apr 23 23:14:19.064858 containerd[1534]: time="2026-04-23T23:14:19.064577543Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\"" Apr 23 23:14:19.065418 containerd[1534]: time="2026-04-23T23:14:19.065376525Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 23 23:14:19.168270 update_engine[1514]: I20260423 23:14:19.168138 1514 update_attempter.cc:509] Updating boot flags... Apr 23 23:14:19.952802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36276894.mount: Deactivated successfully. Apr 23 23:14:20.312438 containerd[1534]: time="2026-04-23T23:14:20.312372792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:20.315592 containerd[1534]: time="2026-04-23T23:14:20.315547996Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040534" Apr 23 23:14:20.317349 containerd[1534]: time="2026-04-23T23:14:20.317291722Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:20.320156 containerd[1534]: time="2026-04-23T23:14:20.320100756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:20.321487 containerd[1534]: time="2026-04-23T23:14:20.321361069Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.255935303s" Apr 23 23:14:20.321487 containerd[1534]: time="2026-04-23T23:14:20.321395030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\"" Apr 23 23:14:20.321991 containerd[1534]: time="2026-04-23T23:14:20.321764840Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 23 23:14:20.795794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4238438127.mount: Deactivated successfully. Apr 23 23:14:21.687162 containerd[1534]: time="2026-04-23T23:14:21.686407218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:21.688768 containerd[1534]: time="2026-04-23T23:14:21.688708676Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Apr 23 23:14:21.691677 containerd[1534]: time="2026-04-23T23:14:21.691123536Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:21.696256 containerd[1534]: time="2026-04-23T23:14:21.696185103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:21.700117 containerd[1534]: time="2026-04-23T23:14:21.700016080Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.378154358s" Apr 23 23:14:21.700117 containerd[1534]: time="2026-04-23T23:14:21.700105082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Apr 23 23:14:21.701573 containerd[1534]: time="2026-04-23T23:14:21.701519077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 23 23:14:22.140664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2912598290.mount: Deactivated successfully. Apr 23 23:14:22.148072 containerd[1534]: time="2026-04-23T23:14:22.147758907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:14:22.149444 containerd[1534]: time="2026-04-23T23:14:22.149257663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Apr 23 23:14:22.150886 containerd[1534]: time="2026-04-23T23:14:22.150834701Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:14:22.153783 containerd[1534]: time="2026-04-23T23:14:22.153735970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 23 23:14:22.155514 containerd[1534]: time="2026-04-23T23:14:22.155478892Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 453.740649ms" Apr 23 23:14:22.155577 containerd[1534]: time="2026-04-23T23:14:22.155525413Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Apr 23 23:14:22.156255 containerd[1534]: time="2026-04-23T23:14:22.156179589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 23 23:14:22.639168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741346019.mount: Deactivated successfully. Apr 23 23:14:23.476054 containerd[1534]: time="2026-04-23T23:14:23.474958992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:23.477722 containerd[1534]: time="2026-04-23T23:14:23.477679534Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886470" Apr 23 23:14:23.478676 containerd[1534]: time="2026-04-23T23:14:23.478640556Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:23.483112 containerd[1534]: time="2026-04-23T23:14:23.481934111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:23.483820 containerd[1534]: time="2026-04-23T23:14:23.483064536Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag 
\"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.326852467s" Apr 23 23:14:23.483910 containerd[1534]: time="2026-04-23T23:14:23.483897155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Apr 23 23:14:28.256933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Apr 23 23:14:28.262255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:28.421213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:28.431240 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 23 23:14:28.485403 kubelet[2297]: E0423 23:14:28.485330 2297 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 23 23:14:28.489254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 23 23:14:28.489389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 23 23:14:28.489669 systemd[1]: kubelet.service: Consumed 164ms CPU time, 104.8M memory peak. Apr 23 23:14:28.659311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:28.659752 systemd[1]: kubelet.service: Consumed 164ms CPU time, 104.8M memory peak. Apr 23 23:14:28.662670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:28.704558 systemd[1]: Reload requested from client PID 2311 ('systemctl') (unit session-7.scope)... 
Apr 23 23:14:28.704576 systemd[1]: Reloading... Apr 23 23:14:28.828067 zram_generator::config[2355]: No configuration found. Apr 23 23:14:29.027181 systemd[1]: Reloading finished in 322 ms. Apr 23 23:14:29.079733 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 23 23:14:29.079817 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 23 23:14:29.080298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:29.080350 systemd[1]: kubelet.service: Consumed 114ms CPU time, 94.9M memory peak. Apr 23 23:14:29.082781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:29.238796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:29.252419 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 23 23:14:29.299122 kubelet[2403]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 23 23:14:29.299122 kubelet[2403]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 23 23:14:29.299122 kubelet[2403]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 23 23:14:29.299122 kubelet[2403]: I0423 23:14:29.297953 2403 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 23 23:14:30.248083 kubelet[2403]: I0423 23:14:30.247992 2403 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 23 23:14:30.248262 kubelet[2403]: I0423 23:14:30.248251 2403 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 23 23:14:30.251046 kubelet[2403]: I0423 23:14:30.250949 2403 server.go:956] "Client rotation is on, will bootstrap in background" Apr 23 23:14:30.286068 kubelet[2403]: E0423 23:14:30.286009 2403 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://49.13.208.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.208.85:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 23 23:14:30.288074 kubelet[2403]: I0423 23:14:30.287517 2403 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 23 23:14:30.304111 kubelet[2403]: I0423 23:14:30.304079 2403 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 23 23:14:30.309065 kubelet[2403]: I0423 23:14:30.309011 2403 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 23 23:14:30.309476 kubelet[2403]: I0423 23:14:30.309441 2403 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 23 23:14:30.309696 kubelet[2403]: I0423 23:14:30.309537 2403 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-a35467bd0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 23 23:14:30.309830 kubelet[2403]: I0423 23:14:30.309815 2403 topology_manager.go:138] "Creating topology manager with none policy" Apr 23 
23:14:30.309890 kubelet[2403]: I0423 23:14:30.309881 2403 container_manager_linux.go:303] "Creating device plugin manager" Apr 23 23:14:30.310243 kubelet[2403]: I0423 23:14:30.310223 2403 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:14:30.317655 kubelet[2403]: I0423 23:14:30.317613 2403 kubelet.go:480] "Attempting to sync node with API server" Apr 23 23:14:30.317973 kubelet[2403]: I0423 23:14:30.317905 2403 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 23 23:14:30.319788 kubelet[2403]: I0423 23:14:30.319698 2403 kubelet.go:386] "Adding apiserver pod source" Apr 23 23:14:30.321893 kubelet[2403]: I0423 23:14:30.321787 2403 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 23 23:14:30.330060 kubelet[2403]: E0423 23:14:30.329227 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://49.13.208.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-n-a35467bd0b&limit=500&resourceVersion=0\": dial tcp 49.13.208.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 23 23:14:30.330060 kubelet[2403]: I0423 23:14:30.329391 2403 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 23 23:14:30.331428 kubelet[2403]: I0423 23:14:30.331382 2403 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 23 23:14:30.331590 kubelet[2403]: W0423 23:14:30.331567 2403 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 23 23:14:30.342078 kubelet[2403]: I0423 23:14:30.342051 2403 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 23 23:14:30.342249 kubelet[2403]: I0423 23:14:30.342239 2403 server.go:1289] "Started kubelet" Apr 23 23:14:30.349647 kubelet[2403]: E0423 23:14:30.349594 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://49.13.208.85:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.208.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 23 23:14:30.350111 kubelet[2403]: I0423 23:14:30.350054 2403 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 23 23:14:30.352586 kubelet[2403]: I0423 23:14:30.352557 2403 server.go:317] "Adding debug handlers to kubelet server" Apr 23 23:14:30.357862 kubelet[2403]: I0423 23:14:30.357835 2403 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 23 23:14:30.361092 kubelet[2403]: E0423 23:14:30.359088 2403 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.208.85:6443/api/v1/namespaces/default/events\": dial tcp 49.13.208.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-a35467bd0b.18a91f6a85906f0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-a35467bd0b,UID:ci-4459-2-4-n-a35467bd0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-a35467bd0b,},FirstTimestamp:2026-04-23 23:14:30.342201102 +0000 UTC m=+1.085275931,LastTimestamp:2026-04-23 23:14:30.342201102 +0000 UTC m=+1.085275931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-a35467bd0b,}" Apr 23 23:14:30.361633 kubelet[2403]: 
I0423 23:14:30.361608 2403 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 23 23:14:30.364184 kubelet[2403]: I0423 23:14:30.358076 2403 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 23 23:14:30.364517 kubelet[2403]: I0423 23:14:30.364498 2403 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 23 23:14:30.365517 kubelet[2403]: E0423 23:14:30.365496 2403 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" Apr 23 23:14:30.365683 kubelet[2403]: I0423 23:14:30.365672 2403 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 23 23:14:30.366070 kubelet[2403]: I0423 23:14:30.366053 2403 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 23 23:14:30.366201 kubelet[2403]: I0423 23:14:30.366191 2403 reconciler.go:26] "Reconciler: start to sync state" Apr 23 23:14:30.366885 kubelet[2403]: E0423 23:14:30.366863 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://49.13.208.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.208.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 23 23:14:30.368147 kubelet[2403]: E0423 23:14:30.367593 2403 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 23 23:14:30.368863 kubelet[2403]: E0423 23:14:30.368647 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.208.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a35467bd0b?timeout=10s\": dial tcp 49.13.208.85:6443: connect: connection refused" interval="200ms" Apr 23 23:14:30.369311 kubelet[2403]: I0423 23:14:30.369290 2403 factory.go:223] Registration of the containerd container factory successfully Apr 23 23:14:30.369777 kubelet[2403]: I0423 23:14:30.369706 2403 factory.go:223] Registration of the systemd container factory successfully Apr 23 23:14:30.369952 kubelet[2403]: I0423 23:14:30.369907 2403 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 23 23:14:30.382971 kubelet[2403]: I0423 23:14:30.382928 2403 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 23 23:14:30.382971 kubelet[2403]: I0423 23:14:30.382949 2403 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 23 23:14:30.382971 kubelet[2403]: I0423 23:14:30.382982 2403 state_mem.go:36] "Initialized new in-memory state store" Apr 23 23:14:30.388058 kubelet[2403]: I0423 23:14:30.387878 2403 policy_none.go:49] "None policy: Start" Apr 23 23:14:30.388058 kubelet[2403]: I0423 23:14:30.387910 2403 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 23 23:14:30.388058 kubelet[2403]: I0423 23:14:30.387937 2403 state_mem.go:35] "Initializing new in-memory state store" Apr 23 23:14:30.388926 kubelet[2403]: I0423 23:14:30.388761 2403 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 23 23:14:30.390423 kubelet[2403]: I0423 23:14:30.390399 2403 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 23 23:14:30.390489 kubelet[2403]: I0423 23:14:30.390432 2403 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 23 23:14:30.390489 kubelet[2403]: I0423 23:14:30.390453 2403 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 23 23:14:30.390489 kubelet[2403]: I0423 23:14:30.390461 2403 kubelet.go:2436] "Starting kubelet main sync loop" Apr 23 23:14:30.390564 kubelet[2403]: E0423 23:14:30.390504 2403 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 23 23:14:30.395823 kubelet[2403]: E0423 23:14:30.395142 2403 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://49.13.208.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.208.85:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 23 23:14:30.397236 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 23 23:14:30.412141 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 23 23:14:30.416182 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 23 23:14:30.428258 kubelet[2403]: E0423 23:14:30.428212 2403 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 23 23:14:30.431312 kubelet[2403]: I0423 23:14:30.431290 2403 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 23 23:14:30.431576 kubelet[2403]: I0423 23:14:30.431538 2403 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 23 23:14:30.433162 kubelet[2403]: I0423 23:14:30.433137 2403 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 23 23:14:30.434391 kubelet[2403]: E0423 23:14:30.434275 2403 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 23 23:14:30.434653 kubelet[2403]: E0423 23:14:30.434579 2403 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-n-a35467bd0b\" not found" Apr 23 23:14:30.507172 systemd[1]: Created slice kubepods-burstable-pod17d4e089f1e05e40158787aaa9ab9bf6.slice - libcontainer container kubepods-burstable-pod17d4e089f1e05e40158787aaa9ab9bf6.slice. Apr 23 23:14:30.524050 kubelet[2403]: E0423 23:14:30.523930 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.527845 systemd[1]: Created slice kubepods-burstable-podac4c3e16d07712f96c9c7df7611d9bf2.slice - libcontainer container kubepods-burstable-podac4c3e16d07712f96c9c7df7611d9bf2.slice. 
Apr 23 23:14:30.531395 kubelet[2403]: E0423 23:14:30.531120 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.534318 kubelet[2403]: I0423 23:14:30.534276 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.534843 kubelet[2403]: E0423 23:14:30.534802 2403 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.208.85:6443/api/v1/nodes\": dial tcp 49.13.208.85:6443: connect: connection refused" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.540081 systemd[1]: Created slice kubepods-burstable-pod55ae120d7b6f4e7ed003159d87a20dd2.slice - libcontainer container kubepods-burstable-pod55ae120d7b6f4e7ed003159d87a20dd2.slice. Apr 23 23:14:30.542431 kubelet[2403]: E0423 23:14:30.542406 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568051 kubelet[2403]: I0423 23:14:30.567971 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17d4e089f1e05e40158787aaa9ab9bf6-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" (UID: \"17d4e089f1e05e40158787aaa9ab9bf6\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568238 kubelet[2403]: I0423 23:14:30.568082 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17d4e089f1e05e40158787aaa9ab9bf6-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" (UID: \"17d4e089f1e05e40158787aaa9ab9bf6\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568238 kubelet[2403]: I0423 23:14:30.568126 
2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17d4e089f1e05e40158787aaa9ab9bf6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" (UID: \"17d4e089f1e05e40158787aaa9ab9bf6\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568238 kubelet[2403]: I0423 23:14:30.568166 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568238 kubelet[2403]: I0423 23:14:30.568204 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568462 kubelet[2403]: I0423 23:14:30.568257 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568462 kubelet[2403]: I0423 23:14:30.568298 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" 
(UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568462 kubelet[2403]: I0423 23:14:30.568335 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.568462 kubelet[2403]: I0423 23:14:30.568376 2403 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55ae120d7b6f4e7ed003159d87a20dd2-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-a35467bd0b\" (UID: \"55ae120d7b6f4e7ed003159d87a20dd2\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.570554 kubelet[2403]: E0423 23:14:30.570498 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.208.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a35467bd0b?timeout=10s\": dial tcp 49.13.208.85:6443: connect: connection refused" interval="400ms" Apr 23 23:14:30.738446 kubelet[2403]: I0423 23:14:30.738404 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.739234 kubelet[2403]: E0423 23:14:30.738941 2403 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.208.85:6443/api/v1/nodes\": dial tcp 49.13.208.85:6443: connect: connection refused" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:30.828152 containerd[1534]: time="2026-04-23T23:14:30.827371504Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-a35467bd0b,Uid:17d4e089f1e05e40158787aaa9ab9bf6,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:30.832263 containerd[1534]: time="2026-04-23T23:14:30.832192584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-a35467bd0b,Uid:ac4c3e16d07712f96c9c7df7611d9bf2,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:30.844691 containerd[1534]: time="2026-04-23T23:14:30.844372987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-a35467bd0b,Uid:55ae120d7b6f4e7ed003159d87a20dd2,Namespace:kube-system,Attempt:0,}" Apr 23 23:14:30.870736 containerd[1534]: time="2026-04-23T23:14:30.870674745Z" level=info msg="connecting to shim 4fb3e79221936cb9702e74e8e0fdc67262801abe648b0c6caf7c76731504d319" address="unix:///run/containerd/s/f665bae2a7462a9b24aa1fa318e25e5b05919cba4c5a732afbd37815f84b04a0" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:30.872042 containerd[1534]: time="2026-04-23T23:14:30.871892885Z" level=info msg="connecting to shim 2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88" address="unix:///run/containerd/s/92f7dd97be91b31ad6df31f221ffefabeb3666adf2135ebcd4b3f7f861f3d4f4" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:30.913134 containerd[1534]: time="2026-04-23T23:14:30.912031714Z" level=info msg="connecting to shim efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0" address="unix:///run/containerd/s/cc83f0076ca31c456e013418d1de8266642a63d941fa19ac965fe59de2fc1ceb" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:30.919301 systemd[1]: Started cri-containerd-4fb3e79221936cb9702e74e8e0fdc67262801abe648b0c6caf7c76731504d319.scope - libcontainer container 4fb3e79221936cb9702e74e8e0fdc67262801abe648b0c6caf7c76731504d319. 
Apr 23 23:14:30.925511 systemd[1]: Started cri-containerd-2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88.scope - libcontainer container 2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88. Apr 23 23:14:30.955369 systemd[1]: Started cri-containerd-efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0.scope - libcontainer container efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0. Apr 23 23:14:30.972780 kubelet[2403]: E0423 23:14:30.972709 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.208.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a35467bd0b?timeout=10s\": dial tcp 49.13.208.85:6443: connect: connection refused" interval="800ms" Apr 23 23:14:31.006986 containerd[1534]: time="2026-04-23T23:14:31.006864290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-n-a35467bd0b,Uid:17d4e089f1e05e40158787aaa9ab9bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fb3e79221936cb9702e74e8e0fdc67262801abe648b0c6caf7c76731504d319\"" Apr 23 23:14:31.013284 kubelet[2403]: E0423 23:14:31.013108 2403 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.208.85:6443/api/v1/namespaces/default/events\": dial tcp 49.13.208.85:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-n-a35467bd0b.18a91f6a85906f0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-n-a35467bd0b,UID:ci-4459-2-4-n-a35467bd0b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-a35467bd0b,},FirstTimestamp:2026-04-23 23:14:30.342201102 +0000 UTC m=+1.085275931,LastTimestamp:2026-04-23 23:14:30.342201102 +0000 UTC m=+1.085275931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-a35467bd0b,}" Apr 23 23:14:31.016676 containerd[1534]: time="2026-04-23T23:14:31.016452283Z" level=info msg="CreateContainer within sandbox \"4fb3e79221936cb9702e74e8e0fdc67262801abe648b0c6caf7c76731504d319\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 23 23:14:31.025637 containerd[1534]: time="2026-04-23T23:14:31.025594589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-n-a35467bd0b,Uid:ac4c3e16d07712f96c9c7df7611d9bf2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88\"" Apr 23 23:14:31.031408 containerd[1534]: time="2026-04-23T23:14:31.031365881Z" level=info msg="CreateContainer within sandbox \"2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 23 23:14:31.041780 containerd[1534]: time="2026-04-23T23:14:31.041742007Z" level=info msg="Container 8c058281cc693a5c4831951cdab1bda9c18fc398579e9e570aab0781443b4913: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:31.042373 containerd[1534]: time="2026-04-23T23:14:31.042340937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-n-a35467bd0b,Uid:55ae120d7b6f4e7ed003159d87a20dd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0\"" Apr 23 23:14:31.049244 containerd[1534]: time="2026-04-23T23:14:31.049198047Z" level=info msg="Container 5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:31.051456 containerd[1534]: time="2026-04-23T23:14:31.051386642Z" level=info msg="CreateContainer within sandbox \"efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 23 23:14:31.053086 containerd[1534]: time="2026-04-23T23:14:31.053009627Z" level=info msg="CreateContainer within sandbox \"4fb3e79221936cb9702e74e8e0fdc67262801abe648b0c6caf7c76731504d319\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c058281cc693a5c4831951cdab1bda9c18fc398579e9e570aab0781443b4913\"" Apr 23 23:14:31.054317 containerd[1534]: time="2026-04-23T23:14:31.054278408Z" level=info msg="StartContainer for \"8c058281cc693a5c4831951cdab1bda9c18fc398579e9e570aab0781443b4913\"" Apr 23 23:14:31.055874 containerd[1534]: time="2026-04-23T23:14:31.055831553Z" level=info msg="connecting to shim 8c058281cc693a5c4831951cdab1bda9c18fc398579e9e570aab0781443b4913" address="unix:///run/containerd/s/f665bae2a7462a9b24aa1fa318e25e5b05919cba4c5a732afbd37815f84b04a0" protocol=ttrpc version=3 Apr 23 23:14:31.059096 containerd[1534]: time="2026-04-23T23:14:31.059040084Z" level=info msg="CreateContainer within sandbox \"2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e\"" Apr 23 23:14:31.059672 containerd[1534]: time="2026-04-23T23:14:31.059644734Z" level=info msg="StartContainer for \"5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e\"" Apr 23 23:14:31.060934 containerd[1534]: time="2026-04-23T23:14:31.060770072Z" level=info msg="connecting to shim 5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e" address="unix:///run/containerd/s/92f7dd97be91b31ad6df31f221ffefabeb3666adf2135ebcd4b3f7f861f3d4f4" protocol=ttrpc version=3 Apr 23 23:14:31.067256 containerd[1534]: time="2026-04-23T23:14:31.067214055Z" level=info msg="Container edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:31.084925 containerd[1534]: 
time="2026-04-23T23:14:31.084210926Z" level=info msg="CreateContainer within sandbox \"efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd\"" Apr 23 23:14:31.085526 containerd[1534]: time="2026-04-23T23:14:31.085486427Z" level=info msg="StartContainer for \"edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd\"" Apr 23 23:14:31.086234 systemd[1]: Started cri-containerd-8c058281cc693a5c4831951cdab1bda9c18fc398579e9e570aab0781443b4913.scope - libcontainer container 8c058281cc693a5c4831951cdab1bda9c18fc398579e9e570aab0781443b4913. Apr 23 23:14:31.086913 containerd[1534]: time="2026-04-23T23:14:31.086770767Z" level=info msg="connecting to shim edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd" address="unix:///run/containerd/s/cc83f0076ca31c456e013418d1de8266642a63d941fa19ac965fe59de2fc1ceb" protocol=ttrpc version=3 Apr 23 23:14:31.094385 systemd[1]: Started cri-containerd-5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e.scope - libcontainer container 5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e. Apr 23 23:14:31.114529 systemd[1]: Started cri-containerd-edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd.scope - libcontainer container edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd. 
Apr 23 23:14:31.146293 kubelet[2403]: I0423 23:14:31.146143 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:31.148102 kubelet[2403]: E0423 23:14:31.148064 2403 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.208.85:6443/api/v1/nodes\": dial tcp 49.13.208.85:6443: connect: connection refused" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:31.178389 containerd[1534]: time="2026-04-23T23:14:31.178235629Z" level=info msg="StartContainer for \"8c058281cc693a5c4831951cdab1bda9c18fc398579e9e570aab0781443b4913\" returns successfully" Apr 23 23:14:31.195834 containerd[1534]: time="2026-04-23T23:14:31.195620347Z" level=info msg="StartContainer for \"5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e\" returns successfully" Apr 23 23:14:31.218837 containerd[1534]: time="2026-04-23T23:14:31.218709276Z" level=info msg="StartContainer for \"edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd\" returns successfully" Apr 23 23:14:31.412134 kubelet[2403]: E0423 23:14:31.410552 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:31.418068 kubelet[2403]: E0423 23:14:31.418002 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:31.421358 kubelet[2403]: E0423 23:14:31.421324 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:31.862713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362280913.mount: Deactivated successfully. 
Apr 23 23:14:31.952557 kubelet[2403]: I0423 23:14:31.952508 2403 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.423743 kubelet[2403]: E0423 23:14:32.423693 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.425362 kubelet[2403]: E0423 23:14:32.425323 2403 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.845530 kubelet[2403]: E0423 23:14:32.845468 2403 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-n-a35467bd0b\" not found" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.899183 kubelet[2403]: I0423 23:14:32.899123 2403 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.968655 kubelet[2403]: I0423 23:14:32.968590 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.983289 kubelet[2403]: E0423 23:14:32.983241 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.983289 kubelet[2403]: I0423 23:14:32.983282 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.987349 kubelet[2403]: E0423 23:14:32.987312 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.987465 kubelet[2403]: I0423 23:14:32.987347 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:32.991395 kubelet[2403]: E0423 23:14:32.991355 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-a35467bd0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:33.166619 kubelet[2403]: I0423 23:14:33.166340 2403 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:33.169661 kubelet[2403]: E0423 23:14:33.169564 2403 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" Apr 23 23:14:33.347053 kubelet[2403]: I0423 23:14:33.346897 2403 apiserver.go:52] "Watching apiserver" Apr 23 23:14:33.366715 kubelet[2403]: I0423 23:14:33.366648 2403 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 23 23:14:35.210425 systemd[1]: Reload requested from client PID 2688 ('systemctl') (unit session-7.scope)... Apr 23 23:14:35.210776 systemd[1]: Reloading... Apr 23 23:14:35.328053 zram_generator::config[2735]: No configuration found. Apr 23 23:14:35.547529 systemd[1]: Reloading finished in 336 ms. Apr 23 23:14:35.579611 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:35.596717 systemd[1]: kubelet.service: Deactivated successfully. Apr 23 23:14:35.597458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:35.597736 systemd[1]: kubelet.service: Consumed 1.498s CPU time, 127.9M memory peak. 
Apr 23 23:14:35.601439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 23 23:14:35.769774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 23 23:14:35.786049 (kubelet)[2777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 23 23:14:35.856437 kubelet[2777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 23 23:14:35.856437 kubelet[2777]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 23 23:14:35.856437 kubelet[2777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 23 23:14:35.856437 kubelet[2777]: I0423 23:14:35.855472 2777 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 23 23:14:35.869258 kubelet[2777]: I0423 23:14:35.869182 2777 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 23 23:14:35.869258 kubelet[2777]: I0423 23:14:35.869231 2777 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 23 23:14:35.869602 kubelet[2777]: I0423 23:14:35.869509 2777 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 23 23:14:35.871164 kubelet[2777]: I0423 23:14:35.871134 2777 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 23 23:14:35.875890 kubelet[2777]: I0423 23:14:35.875132 2777 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 23 23:14:35.886343 kubelet[2777]: I0423 23:14:35.886142 2777 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 23 23:14:35.889340 kubelet[2777]: I0423 23:14:35.889284 2777 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 23 23:14:35.889504 kubelet[2777]: I0423 23:14:35.889466 2777 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 23 23:14:35.889655 kubelet[2777]: I0423 23:14:35.889495 2777 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-n-a35467bd0b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 23 23:14:35.889816 kubelet[2777]: I0423 23:14:35.889660 2777 topology_manager.go:138] "Creating topology manager with none policy"
Apr 23 23:14:35.889816 kubelet[2777]: I0423 23:14:35.889670 2777 container_manager_linux.go:303] "Creating device plugin manager"
Apr 23 23:14:35.889816 kubelet[2777]: I0423 23:14:35.889714 2777 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 23:14:35.890142 kubelet[2777]: I0423 23:14:35.889868 2777 kubelet.go:480] "Attempting to sync node with API server"
Apr 23 23:14:35.890142 kubelet[2777]: I0423 23:14:35.889881 2777 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 23 23:14:35.890142 kubelet[2777]: I0423 23:14:35.889902 2777 kubelet.go:386] "Adding apiserver pod source"
Apr 23 23:14:35.890142 kubelet[2777]: I0423 23:14:35.889918 2777 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 23 23:14:35.892443 kubelet[2777]: I0423 23:14:35.892400 2777 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 23 23:14:35.893144 kubelet[2777]: I0423 23:14:35.893117 2777 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 23 23:14:35.896418 kubelet[2777]: I0423 23:14:35.896387 2777 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 23 23:14:35.896501 kubelet[2777]: I0423 23:14:35.896443 2777 server.go:1289] "Started kubelet"
Apr 23 23:14:35.900191 kubelet[2777]: I0423 23:14:35.898720 2777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 23 23:14:35.905988 kubelet[2777]: I0423 23:14:35.905925 2777 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 23 23:14:35.910374 kubelet[2777]: I0423 23:14:35.910297 2777 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 23 23:14:35.913040 kubelet[2777]: I0423 23:14:35.911595 2777 server.go:317] "Adding debug handlers to kubelet server"
Apr 23 23:14:35.916421 kubelet[2777]: I0423 23:14:35.915492 2777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 23 23:14:35.916421 kubelet[2777]: I0423 23:14:35.915705 2777 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 23 23:14:35.916421 kubelet[2777]: I0423 23:14:35.915926 2777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 23 23:14:35.918637 kubelet[2777]: I0423 23:14:35.916979 2777 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 23 23:14:35.928670 kubelet[2777]: I0423 23:14:35.928640 2777 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 23 23:14:35.928930 kubelet[2777]: I0423 23:14:35.928918 2777 reconciler.go:26] "Reconciler: start to sync state"
Apr 23 23:14:35.932310 kubelet[2777]: I0423 23:14:35.930654 2777 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 23 23:14:35.932310 kubelet[2777]: I0423 23:14:35.930684 2777 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 23 23:14:35.932310 kubelet[2777]: I0423 23:14:35.930700 2777 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 23 23:14:35.932310 kubelet[2777]: I0423 23:14:35.930710 2777 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 23 23:14:35.932310 kubelet[2777]: E0423 23:14:35.930752 2777 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 23 23:14:35.936098 kubelet[2777]: E0423 23:14:35.934244 2777 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 23 23:14:35.936098 kubelet[2777]: I0423 23:14:35.935381 2777 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 23 23:14:35.938035 kubelet[2777]: I0423 23:14:35.936837 2777 factory.go:223] Registration of the containerd container factory successfully
Apr 23 23:14:35.938035 kubelet[2777]: I0423 23:14:35.936853 2777 factory.go:223] Registration of the systemd container factory successfully
Apr 23 23:14:36.002800 kubelet[2777]: I0423 23:14:36.002759 2777 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 23 23:14:36.002800 kubelet[2777]: I0423 23:14:36.002788 2777 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 23 23:14:36.003107 kubelet[2777]: I0423 23:14:36.002845 2777 state_mem.go:36] "Initialized new in-memory state store"
Apr 23 23:14:36.003107 kubelet[2777]: I0423 23:14:36.003012 2777 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 23 23:14:36.003107 kubelet[2777]: I0423 23:14:36.003053 2777 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 23 23:14:36.003107 kubelet[2777]: I0423 23:14:36.003073 2777 policy_none.go:49] "None policy: Start"
Apr 23 23:14:36.003107 kubelet[2777]: I0423 23:14:36.003082 2777 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 23 23:14:36.003107 kubelet[2777]: I0423 23:14:36.003093 2777 state_mem.go:35] "Initializing new in-memory state store"
Apr 23 23:14:36.003619 kubelet[2777]: I0423 23:14:36.003181 2777 state_mem.go:75] "Updated machine memory state"
Apr 23 23:14:36.007720 kubelet[2777]: E0423 23:14:36.007663 2777 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 23 23:14:36.007881 kubelet[2777]: I0423 23:14:36.007849 2777 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 23 23:14:36.007930 kubelet[2777]: I0423 23:14:36.007868 2777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 23 23:14:36.008903 kubelet[2777]: I0423 23:14:36.008885 2777 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 23 23:14:36.010598 kubelet[2777]: E0423 23:14:36.009853 2777 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 23 23:14:36.032266 kubelet[2777]: I0423 23:14:36.032215 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.034130 kubelet[2777]: I0423 23:14:36.034081 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.034130 kubelet[2777]: I0423 23:14:36.034134 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.120176 kubelet[2777]: I0423 23:14:36.119509 2777 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130360 kubelet[2777]: I0423 23:14:36.130306 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130360 kubelet[2777]: I0423 23:14:36.130347 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/17d4e089f1e05e40158787aaa9ab9bf6-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" (UID: \"17d4e089f1e05e40158787aaa9ab9bf6\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130603 kubelet[2777]: I0423 23:14:36.130376 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130603 kubelet[2777]: I0423 23:14:36.130398 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130603 kubelet[2777]: I0423 23:14:36.130418 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55ae120d7b6f4e7ed003159d87a20dd2-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-n-a35467bd0b\" (UID: \"55ae120d7b6f4e7ed003159d87a20dd2\") " pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130603 kubelet[2777]: I0423 23:14:36.130433 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/17d4e089f1e05e40158787aaa9ab9bf6-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" (UID: \"17d4e089f1e05e40158787aaa9ab9bf6\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130603 kubelet[2777]: I0423 23:14:36.130452 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/17d4e089f1e05e40158787aaa9ab9bf6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" (UID: \"17d4e089f1e05e40158787aaa9ab9bf6\") " pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130774 kubelet[2777]: I0423 23:14:36.130468 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.130774 kubelet[2777]: I0423 23:14:36.130485 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac4c3e16d07712f96c9c7df7611d9bf2-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" (UID: \"ac4c3e16d07712f96c9c7df7611d9bf2\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.133541 kubelet[2777]: I0423 23:14:36.133496 2777 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.133799 kubelet[2777]: I0423 23:14:36.133589 2777 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.890822 kubelet[2777]: I0423 23:14:36.890788 2777 apiserver.go:52] "Watching apiserver"
Apr 23 23:14:36.929744 kubelet[2777]: I0423 23:14:36.928906 2777 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 23 23:14:36.988538 kubelet[2777]: I0423 23:14:36.987575 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.989341 kubelet[2777]: I0423 23:14:36.988891 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:36.990233 kubelet[2777]: I0423 23:14:36.989954 2777 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:37.002124 kubelet[2777]: E0423 23:14:37.002015 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-n-a35467bd0b\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:37.003912 kubelet[2777]: E0423 23:14:37.003870 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-n-a35467bd0b\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:37.004830 kubelet[2777]: E0423 23:14:37.004809 2777 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-n-a35467bd0b\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b"
Apr 23 23:14:37.049062 kubelet[2777]: I0423 23:14:37.047397 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-n-a35467bd0b" podStartSLOduration=1.0473679 podStartE2EDuration="1.0473679s" podCreationTimestamp="2026-04-23 23:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:37.045286113 +0000 UTC m=+1.254514215" watchObservedRunningTime="2026-04-23 23:14:37.0473679 +0000 UTC m=+1.256595962"
Apr 23 23:14:37.049062 kubelet[2777]: I0423 23:14:37.047644 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-n-a35467bd0b" podStartSLOduration=1.047631103 podStartE2EDuration="1.047631103s" podCreationTimestamp="2026-04-23 23:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:37.031408256 +0000 UTC m=+1.240636318" watchObservedRunningTime="2026-04-23 23:14:37.047631103 +0000 UTC m=+1.256859165"
Apr 23 23:14:37.061405 kubelet[2777]: I0423 23:14:37.061285 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-n-a35467bd0b" podStartSLOduration=1.061255197 podStartE2EDuration="1.061255197s" podCreationTimestamp="2026-04-23 23:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:37.061121555 +0000 UTC m=+1.270349617" watchObservedRunningTime="2026-04-23 23:14:37.061255197 +0000 UTC m=+1.270483219"
Apr 23 23:14:39.809242 kubelet[2777]: I0423 23:14:39.809179 2777 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 23 23:14:39.810401 containerd[1534]: time="2026-04-23T23:14:39.810337924Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 23 23:14:39.811179 kubelet[2777]: I0423 23:14:39.810723 2777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 23 23:14:40.874629 systemd[1]: Created slice kubepods-besteffort-podef719f19_5f1d_4a26_a16a_e5acdc686993.slice - libcontainer container kubepods-besteffort-podef719f19_5f1d_4a26_a16a_e5acdc686993.slice.
Apr 23 23:14:40.963727 kubelet[2777]: I0423 23:14:40.963647 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef719f19-5f1d-4a26-a16a-e5acdc686993-xtables-lock\") pod \"kube-proxy-tmrhd\" (UID: \"ef719f19-5f1d-4a26-a16a-e5acdc686993\") " pod="kube-system/kube-proxy-tmrhd"
Apr 23 23:14:40.964667 kubelet[2777]: I0423 23:14:40.964453 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef719f19-5f1d-4a26-a16a-e5acdc686993-lib-modules\") pod \"kube-proxy-tmrhd\" (UID: \"ef719f19-5f1d-4a26-a16a-e5acdc686993\") " pod="kube-system/kube-proxy-tmrhd"
Apr 23 23:14:40.964667 kubelet[2777]: I0423 23:14:40.964510 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-284kr\" (UniqueName: \"kubernetes.io/projected/ef719f19-5f1d-4a26-a16a-e5acdc686993-kube-api-access-284kr\") pod \"kube-proxy-tmrhd\" (UID: \"ef719f19-5f1d-4a26-a16a-e5acdc686993\") " pod="kube-system/kube-proxy-tmrhd"
Apr 23 23:14:40.964667 kubelet[2777]: I0423 23:14:40.964558 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef719f19-5f1d-4a26-a16a-e5acdc686993-kube-proxy\") pod \"kube-proxy-tmrhd\" (UID: \"ef719f19-5f1d-4a26-a16a-e5acdc686993\") " pod="kube-system/kube-proxy-tmrhd"
Apr 23 23:14:41.062759 systemd[1]: Created slice kubepods-besteffort-pod06d94110_ba2c_4260_aa31_f8468915c427.slice - libcontainer container kubepods-besteffort-pod06d94110_ba2c_4260_aa31_f8468915c427.slice.
Apr 23 23:14:41.065461 kubelet[2777]: I0423 23:14:41.065406 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/06d94110-ba2c-4260-aa31-f8468915c427-var-lib-calico\") pod \"tigera-operator-8458958b4d-bffzk\" (UID: \"06d94110-ba2c-4260-aa31-f8468915c427\") " pod="tigera-operator/tigera-operator-8458958b4d-bffzk"
Apr 23 23:14:41.065801 kubelet[2777]: I0423 23:14:41.065656 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s24mq\" (UniqueName: \"kubernetes.io/projected/06d94110-ba2c-4260-aa31-f8468915c427-kube-api-access-s24mq\") pod \"tigera-operator-8458958b4d-bffzk\" (UID: \"06d94110-ba2c-4260-aa31-f8468915c427\") " pod="tigera-operator/tigera-operator-8458958b4d-bffzk"
Apr 23 23:14:41.187064 containerd[1534]: time="2026-04-23T23:14:41.186401666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmrhd,Uid:ef719f19-5f1d-4a26-a16a-e5acdc686993,Namespace:kube-system,Attempt:0,}"
Apr 23 23:14:41.210646 containerd[1534]: time="2026-04-23T23:14:41.210507976Z" level=info msg="connecting to shim 09788408d1d7e6e1d8fdebeabf42e446cabae1abb24d00b6d4c92956f4df9e52" address="unix:///run/containerd/s/08d7c245030375f5fe46daaef885009fe63d748bb0f5328dd5bbe3f3497402ad" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:14:41.242450 systemd[1]: Started cri-containerd-09788408d1d7e6e1d8fdebeabf42e446cabae1abb24d00b6d4c92956f4df9e52.scope - libcontainer container 09788408d1d7e6e1d8fdebeabf42e446cabae1abb24d00b6d4c92956f4df9e52.
Apr 23 23:14:41.275032 containerd[1534]: time="2026-04-23T23:14:41.274967498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tmrhd,Uid:ef719f19-5f1d-4a26-a16a-e5acdc686993,Namespace:kube-system,Attempt:0,} returns sandbox id \"09788408d1d7e6e1d8fdebeabf42e446cabae1abb24d00b6d4c92956f4df9e52\""
Apr 23 23:14:41.285854 containerd[1534]: time="2026-04-23T23:14:41.285742779Z" level=info msg="CreateContainer within sandbox \"09788408d1d7e6e1d8fdebeabf42e446cabae1abb24d00b6d4c92956f4df9e52\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 23 23:14:41.301912 containerd[1534]: time="2026-04-23T23:14:41.301838559Z" level=info msg="Container 0c26216133333d1cc3b4ae4a194f4c531ddd58ae3a64fb814182456cddb31199: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:41.313888 containerd[1534]: time="2026-04-23T23:14:41.313806733Z" level=info msg="CreateContainer within sandbox \"09788408d1d7e6e1d8fdebeabf42e446cabae1abb24d00b6d4c92956f4df9e52\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c26216133333d1cc3b4ae4a194f4c531ddd58ae3a64fb814182456cddb31199\""
Apr 23 23:14:41.315400 containerd[1534]: time="2026-04-23T23:14:41.315348071Z" level=info msg="StartContainer for \"0c26216133333d1cc3b4ae4a194f4c531ddd58ae3a64fb814182456cddb31199\""
Apr 23 23:14:41.317382 containerd[1534]: time="2026-04-23T23:14:41.317310333Z" level=info msg="connecting to shim 0c26216133333d1cc3b4ae4a194f4c531ddd58ae3a64fb814182456cddb31199" address="unix:///run/containerd/s/08d7c245030375f5fe46daaef885009fe63d748bb0f5328dd5bbe3f3497402ad" protocol=ttrpc version=3
Apr 23 23:14:41.341275 systemd[1]: Started cri-containerd-0c26216133333d1cc3b4ae4a194f4c531ddd58ae3a64fb814182456cddb31199.scope - libcontainer container 0c26216133333d1cc3b4ae4a194f4c531ddd58ae3a64fb814182456cddb31199.
Apr 23 23:14:41.371050 containerd[1534]: time="2026-04-23T23:14:41.370936773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-bffzk,Uid:06d94110-ba2c-4260-aa31-f8468915c427,Namespace:tigera-operator,Attempt:0,}"
Apr 23 23:14:41.402006 containerd[1534]: time="2026-04-23T23:14:41.401301513Z" level=info msg="connecting to shim 29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9" address="unix:///run/containerd/s/701e9a73a6dda6aa5ea03d8ebd50c879cc3d292584b30da2b3b6a11ae33e82be" namespace=k8s.io protocol=ttrpc version=3
Apr 23 23:14:41.414053 containerd[1534]: time="2026-04-23T23:14:41.413958815Z" level=info msg="StartContainer for \"0c26216133333d1cc3b4ae4a194f4c531ddd58ae3a64fb814182456cddb31199\" returns successfully"
Apr 23 23:14:41.441295 systemd[1]: Started cri-containerd-29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9.scope - libcontainer container 29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9.
Apr 23 23:14:41.503311 containerd[1534]: time="2026-04-23T23:14:41.503248495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-bffzk,Uid:06d94110-ba2c-4260-aa31-f8468915c427,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9\""
Apr 23 23:14:41.505805 containerd[1534]: time="2026-04-23T23:14:41.505758243Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\""
Apr 23 23:14:43.109896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668145879.mount: Deactivated successfully.
Apr 23 23:14:43.575636 containerd[1534]: time="2026-04-23T23:14:43.575342606Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:14:43.577143 containerd[1534]: time="2026-04-23T23:14:43.577068624Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=24868969"
Apr 23 23:14:43.578237 containerd[1534]: time="2026-04-23T23:14:43.578190356Z" level=info msg="ImageCreate event name:\"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:14:43.582101 containerd[1534]: time="2026-04-23T23:14:43.582014516Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 23 23:14:43.582657 containerd[1534]: time="2026-04-23T23:14:43.582586122Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"24864964\" in 2.076779878s"
Apr 23 23:14:43.582657 containerd[1534]: time="2026-04-23T23:14:43.582624522Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:f37773829212e34063aa0c4c18558c40f2fc7ce0c68e8139b71af2ff71e26790\""
Apr 23 23:14:43.592253 containerd[1534]: time="2026-04-23T23:14:43.592212864Z" level=info msg="CreateContainer within sandbox \"29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 23 23:14:43.604067 containerd[1534]: time="2026-04-23T23:14:43.603469463Z" level=info msg="Container 8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b: CDI devices from CRI Config.CDIDevices: []"
Apr 23 23:14:43.617055 containerd[1534]: time="2026-04-23T23:14:43.615185066Z" level=info msg="CreateContainer within sandbox \"29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b\""
Apr 23 23:14:43.618773 containerd[1534]: time="2026-04-23T23:14:43.618399740Z" level=info msg="StartContainer for \"8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b\""
Apr 23 23:14:43.620753 containerd[1534]: time="2026-04-23T23:14:43.620009597Z" level=info msg="connecting to shim 8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b" address="unix:///run/containerd/s/701e9a73a6dda6aa5ea03d8ebd50c879cc3d292584b30da2b3b6a11ae33e82be" protocol=ttrpc version=3
Apr 23 23:14:43.647259 systemd[1]: Started cri-containerd-8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b.scope - libcontainer container 8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b.
Apr 23 23:14:43.691689 containerd[1534]: time="2026-04-23T23:14:43.691641754Z" level=info msg="StartContainer for \"8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b\" returns successfully"
Apr 23 23:14:44.027669 kubelet[2777]: I0423 23:14:44.026825 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tmrhd" podStartSLOduration=4.026807086 podStartE2EDuration="4.026807086s" podCreationTimestamp="2026-04-23 23:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:14:42.024239043 +0000 UTC m=+6.233467065" watchObservedRunningTime="2026-04-23 23:14:44.026807086 +0000 UTC m=+8.236035108"
Apr 23 23:14:44.919381 kubelet[2777]: I0423 23:14:44.919270 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-8458958b4d-bffzk" podStartSLOduration=2.837101397 podStartE2EDuration="4.919238972s" podCreationTimestamp="2026-04-23 23:14:40 +0000 UTC" firstStartedPulling="2026-04-23 23:14:41.505388079 +0000 UTC m=+5.714616101" lastFinishedPulling="2026-04-23 23:14:43.587525654 +0000 UTC m=+7.796753676" observedRunningTime="2026-04-23 23:14:44.028105339 +0000 UTC m=+8.237333401" watchObservedRunningTime="2026-04-23 23:14:44.919238972 +0000 UTC m=+9.128466994"
Apr 23 23:14:50.067150 sudo[1813]: pam_unix(sudo:session): session closed for user root
Apr 23 23:14:50.084900 sshd[1812]: Connection closed by 50.85.169.122 port 60430
Apr 23 23:14:50.086227 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Apr 23 23:14:50.091503 systemd[1]: sshd@6-49.13.208.85:22-50.85.169.122:60430.service: Deactivated successfully.
Apr 23 23:14:50.096142 systemd[1]: session-7.scope: Deactivated successfully.
Apr 23 23:14:50.097894 systemd[1]: session-7.scope: Consumed 7.425s CPU time, 222.5M memory peak.
Apr 23 23:14:50.104097 systemd-logind[1504]: Session 7 logged out. Waiting for processes to exit.
Apr 23 23:14:50.107895 systemd-logind[1504]: Removed session 7.
Apr 23 23:14:54.812327 systemd[1]: Created slice kubepods-besteffort-pod33395e66_8d65_49e3_b764_aa8e9cbb72f1.slice - libcontainer container kubepods-besteffort-pod33395e66_8d65_49e3_b764_aa8e9cbb72f1.slice.
Apr 23 23:14:54.856929 kubelet[2777]: I0423 23:14:54.856872 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/33395e66-8d65-49e3-b764-aa8e9cbb72f1-typha-certs\") pod \"calico-typha-8544754984-vgm9r\" (UID: \"33395e66-8d65-49e3-b764-aa8e9cbb72f1\") " pod="calico-system/calico-typha-8544754984-vgm9r"
Apr 23 23:14:54.856929 kubelet[2777]: I0423 23:14:54.856922 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33395e66-8d65-49e3-b764-aa8e9cbb72f1-tigera-ca-bundle\") pod \"calico-typha-8544754984-vgm9r\" (UID: \"33395e66-8d65-49e3-b764-aa8e9cbb72f1\") " pod="calico-system/calico-typha-8544754984-vgm9r"
Apr 23 23:14:54.857353 kubelet[2777]: I0423 23:14:54.856948 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6v5\" (UniqueName: \"kubernetes.io/projected/33395e66-8d65-49e3-b764-aa8e9cbb72f1-kube-api-access-lp6v5\") pod \"calico-typha-8544754984-vgm9r\" (UID: \"33395e66-8d65-49e3-b764-aa8e9cbb72f1\") " pod="calico-system/calico-typha-8544754984-vgm9r"
Apr 23 23:14:54.957329 kubelet[2777]: I0423 23:14:54.957240 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-flexvol-driver-host\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957329 kubelet[2777]: I0423 23:14:54.957287 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-node-certs\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957329 kubelet[2777]: I0423 23:14:54.957306 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-var-run-calico\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957329 kubelet[2777]: I0423 23:14:54.957334 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-policysync\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957565 kubelet[2777]: I0423 23:14:54.957349 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-xtables-lock\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957565 kubelet[2777]: I0423 23:14:54.957364 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f7z9\" (UniqueName: \"kubernetes.io/projected/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-kube-api-access-2f7z9\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957565 kubelet[2777]: I0423 23:14:54.957392 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-cni-bin-dir\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957565 kubelet[2777]: I0423 23:14:54.957406 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-lib-modules\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957565 kubelet[2777]: I0423 23:14:54.957420 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-nodeproc\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957669 kubelet[2777]: I0423 23:14:54.957447 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-bpffs\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957669 kubelet[2777]: I0423 23:14:54.957462 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-tigera-ca-bundle\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj"
Apr 23 23:14:54.957669 kubelet[2777]: I0423 23:14:54.957478 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName:
\"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-var-lib-calico\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj" Apr 23 23:14:54.957669 kubelet[2777]: I0423 23:14:54.957495 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-cni-log-dir\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj" Apr 23 23:14:54.957669 kubelet[2777]: I0423 23:14:54.957512 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-cni-net-dir\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj" Apr 23 23:14:54.957786 kubelet[2777]: I0423 23:14:54.957530 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/85b122a3-2c3a-4346-bd8b-002c9a9cd5c0-sys-fs\") pod \"calico-node-lpdxj\" (UID: \"85b122a3-2c3a-4346-bd8b-002c9a9cd5c0\") " pod="calico-system/calico-node-lpdxj" Apr 23 23:14:54.959326 systemd[1]: Created slice kubepods-besteffort-pod85b122a3_2c3a_4346_bd8b_002c9a9cd5c0.slice - libcontainer container kubepods-besteffort-pod85b122a3_2c3a_4346_bd8b_002c9a9cd5c0.slice. 
Apr 23 23:14:55.048264 kubelet[2777]: E0423 23:14:55.047592 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:14:55.058422 kubelet[2777]: I0423 23:14:55.058306 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c8088906-90c9-4538-98fa-60ada979bc32-registration-dir\") pod \"csi-node-driver-w65pl\" (UID: \"c8088906-90c9-4538-98fa-60ada979bc32\") " pod="calico-system/csi-node-driver-w65pl" Apr 23 23:14:55.059702 kubelet[2777]: I0423 23:14:55.058571 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c8088906-90c9-4538-98fa-60ada979bc32-kubelet-dir\") pod \"csi-node-driver-w65pl\" (UID: \"c8088906-90c9-4538-98fa-60ada979bc32\") " pod="calico-system/csi-node-driver-w65pl" Apr 23 23:14:55.059702 kubelet[2777]: I0423 23:14:55.058732 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmbc\" (UniqueName: \"kubernetes.io/projected/c8088906-90c9-4538-98fa-60ada979bc32-kube-api-access-8jmbc\") pod \"csi-node-driver-w65pl\" (UID: \"c8088906-90c9-4538-98fa-60ada979bc32\") " pod="calico-system/csi-node-driver-w65pl" Apr 23 23:14:55.059702 kubelet[2777]: I0423 23:14:55.058909 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c8088906-90c9-4538-98fa-60ada979bc32-socket-dir\") pod \"csi-node-driver-w65pl\" (UID: \"c8088906-90c9-4538-98fa-60ada979bc32\") " pod="calico-system/csi-node-driver-w65pl" Apr 23 23:14:55.059702 
kubelet[2777]: I0423 23:14:55.059081 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c8088906-90c9-4538-98fa-60ada979bc32-varrun\") pod \"csi-node-driver-w65pl\" (UID: \"c8088906-90c9-4538-98fa-60ada979bc32\") " pod="calico-system/csi-node-driver-w65pl" Apr 23 23:14:55.070961 kubelet[2777]: E0423 23:14:55.068932 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:55.070961 kubelet[2777]: W0423 23:14:55.070232 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:55.070961 kubelet[2777]: E0423 23:14:55.070316 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:55.101123 kubelet[2777]: E0423 23:14:55.100406 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:55.101717 kubelet[2777]: W0423 23:14:55.101289 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:55.101717 kubelet[2777]: E0423 23:14:55.101340 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:55.117919 containerd[1534]: time="2026-04-23T23:14:55.117427108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8544754984-vgm9r,Uid:33395e66-8d65-49e3-b764-aa8e9cbb72f1,Namespace:calico-system,Attempt:0,}" Apr 23 23:14:55.153340 containerd[1534]: time="2026-04-23T23:14:55.153235676Z" level=info msg="connecting to shim 72c6fa0d19f8358e59873e1bb89a3a20559c3ff257dd8ab0a6e6308dbe44c21e" address="unix:///run/containerd/s/0e9721a9b2523fe862ca511e3efa84fe38132cd433ea928cea5ce03c3860d995" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:55.159857 kubelet[2777]: E0423 23:14:55.159781 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:55.159857 kubelet[2777]: W0423 23:14:55.159808 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:55.159857 kubelet[2777]: E0423 23:14:55.159828 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:55.160529 kubelet[2777]: E0423 23:14:55.160444 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:55.160529 kubelet[2777]: W0423 23:14:55.160458 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:55.161270 kubelet[2777]: E0423 23:14:55.160471 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [previous three-line FlexVolume driver-call failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeated with identical content through Apr 23 23:14:55.175659] Apr 23 23:14:55.186309 systemd[1]: Started cri-containerd-72c6fa0d19f8358e59873e1bb89a3a20559c3ff257dd8ab0a6e6308dbe44c21e.scope - libcontainer container 72c6fa0d19f8358e59873e1bb89a3a20559c3ff257dd8ab0a6e6308dbe44c21e. Apr 23 23:14:55.195946 kubelet[2777]: E0423 23:14:55.195884 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:55.195946 kubelet[2777]: W0423 23:14:55.195910 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:55.196334 kubelet[2777]: E0423 23:14:55.196204 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:55.237265 containerd[1534]: time="2026-04-23T23:14:55.237213833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8544754984-vgm9r,Uid:33395e66-8d65-49e3-b764-aa8e9cbb72f1,Namespace:calico-system,Attempt:0,} returns sandbox id \"72c6fa0d19f8358e59873e1bb89a3a20559c3ff257dd8ab0a6e6308dbe44c21e\"" Apr 23 23:14:55.239778 containerd[1534]: time="2026-04-23T23:14:55.239686733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 23 23:14:55.274808 containerd[1534]: time="2026-04-23T23:14:55.274750096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpdxj,Uid:85b122a3-2c3a-4346-bd8b-002c9a9cd5c0,Namespace:calico-system,Attempt:0,}" Apr 23 23:14:55.298672 containerd[1534]: time="2026-04-23T23:14:55.298591568Z" level=info msg="connecting to shim 752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9" address="unix:///run/containerd/s/c629e07e6aac5292494b5c6ece8eb6546f07ed663aa196bde58e46b8d0948f8f" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:14:55.330333 systemd[1]: Started cri-containerd-752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9.scope - libcontainer container 752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9. Apr 23 23:14:55.370866 containerd[1534]: time="2026-04-23T23:14:55.370815670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lpdxj,Uid:85b122a3-2c3a-4346-bd8b-002c9a9cd5c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\"" Apr 23 23:14:56.761631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839040331.mount: Deactivated successfully. 
Apr 23 23:14:56.931975 kubelet[2777]: E0423 23:14:56.931902 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:14:57.191231 containerd[1534]: time="2026-04-23T23:14:57.191058998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:57.193055 containerd[1534]: time="2026-04-23T23:14:57.192925253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=32841445" Apr 23 23:14:57.193778 containerd[1534]: time="2026-04-23T23:14:57.193683618Z" level=info msg="ImageCreate event name:\"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:57.197363 containerd[1534]: time="2026-04-23T23:14:57.197272166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:57.198237 containerd[1534]: time="2026-04-23T23:14:57.197604369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"32841299\" in 1.957678354s" Apr 23 23:14:57.198237 containerd[1534]: time="2026-04-23T23:14:57.197638889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference 
\"sha256:265c145eea96693e7abfe97a68dee913c8e656947f5708c28e4e866d3809b4c9\"" Apr 23 23:14:57.200300 containerd[1534]: time="2026-04-23T23:14:57.199922187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 23 23:14:57.219328 containerd[1534]: time="2026-04-23T23:14:57.219284698Z" level=info msg="CreateContainer within sandbox \"72c6fa0d19f8358e59873e1bb89a3a20559c3ff257dd8ab0a6e6308dbe44c21e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 23 23:14:57.228193 containerd[1534]: time="2026-04-23T23:14:57.228143287Z" level=info msg="Container 2d992408962a0db707b6d1eb9149e81226fccc4e5cc6b251522f1b79b47cfcc6: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:57.237447 containerd[1534]: time="2026-04-23T23:14:57.237386839Z" level=info msg="CreateContainer within sandbox \"72c6fa0d19f8358e59873e1bb89a3a20559c3ff257dd8ab0a6e6308dbe44c21e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2d992408962a0db707b6d1eb9149e81226fccc4e5cc6b251522f1b79b47cfcc6\"" Apr 23 23:14:57.240057 containerd[1534]: time="2026-04-23T23:14:57.238601329Z" level=info msg="StartContainer for \"2d992408962a0db707b6d1eb9149e81226fccc4e5cc6b251522f1b79b47cfcc6\"" Apr 23 23:14:57.240521 containerd[1534]: time="2026-04-23T23:14:57.240455783Z" level=info msg="connecting to shim 2d992408962a0db707b6d1eb9149e81226fccc4e5cc6b251522f1b79b47cfcc6" address="unix:///run/containerd/s/0e9721a9b2523fe862ca511e3efa84fe38132cd433ea928cea5ce03c3860d995" protocol=ttrpc version=3 Apr 23 23:14:57.276318 systemd[1]: Started cri-containerd-2d992408962a0db707b6d1eb9149e81226fccc4e5cc6b251522f1b79b47cfcc6.scope - libcontainer container 2d992408962a0db707b6d1eb9149e81226fccc4e5cc6b251522f1b79b47cfcc6. 
Apr 23 23:14:57.325334 containerd[1534]: time="2026-04-23T23:14:57.325250925Z" level=info msg="StartContainer for \"2d992408962a0db707b6d1eb9149e81226fccc4e5cc6b251522f1b79b47cfcc6\" returns successfully" Apr 23 23:14:58.070274 kubelet[2777]: E0423 23:14:58.070224 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.070274 kubelet[2777]: W0423 23:14:58.070252 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.070274 kubelet[2777]: E0423 23:14:58.070274 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.070847 kubelet[2777]: E0423 23:14:58.070414 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.070847 kubelet[2777]: W0423 23:14:58.070421 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.070847 kubelet[2777]: E0423 23:14:58.070458 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.070963 kubelet[2777]: E0423 23:14:58.070937 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.070963 kubelet[2777]: W0423 23:14:58.070947 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.070963 kubelet[2777]: E0423 23:14:58.070958 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.071215 kubelet[2777]: E0423 23:14:58.071159 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.071215 kubelet[2777]: W0423 23:14:58.071194 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.071215 kubelet[2777]: E0423 23:14:58.071204 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.072408 kubelet[2777]: E0423 23:14:58.072385 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.072698 kubelet[2777]: W0423 23:14:58.072525 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.072698 kubelet[2777]: E0423 23:14:58.072560 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.073235 kubelet[2777]: E0423 23:14:58.073006 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.073235 kubelet[2777]: W0423 23:14:58.073067 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.073235 kubelet[2777]: E0423 23:14:58.073085 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.073503 kubelet[2777]: E0423 23:14:58.073484 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.073608 kubelet[2777]: W0423 23:14:58.073590 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.073696 kubelet[2777]: E0423 23:14:58.073679 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.074132 kubelet[2777]: E0423 23:14:58.073990 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.074132 kubelet[2777]: W0423 23:14:58.074005 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.074132 kubelet[2777]: E0423 23:14:58.074016 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.074299 kubelet[2777]: E0423 23:14:58.074286 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.074350 kubelet[2777]: W0423 23:14:58.074340 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.074404 kubelet[2777]: E0423 23:14:58.074393 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.074611 kubelet[2777]: E0423 23:14:58.074598 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.074784 kubelet[2777]: W0423 23:14:58.074674 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.074784 kubelet[2777]: E0423 23:14:58.074689 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.074921 kubelet[2777]: E0423 23:14:58.074908 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.075047 kubelet[2777]: W0423 23:14:58.074970 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.075047 kubelet[2777]: E0423 23:14:58.074986 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.075439 kubelet[2777]: E0423 23:14:58.075326 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.075439 kubelet[2777]: W0423 23:14:58.075340 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.075439 kubelet[2777]: E0423 23:14:58.075351 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.075608 kubelet[2777]: E0423 23:14:58.075595 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.075665 kubelet[2777]: W0423 23:14:58.075654 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.075747 kubelet[2777]: E0423 23:14:58.075716 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.075983 kubelet[2777]: E0423 23:14:58.075970 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.076197 kubelet[2777]: W0423 23:14:58.076081 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.076197 kubelet[2777]: E0423 23:14:58.076101 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.076488 kubelet[2777]: E0423 23:14:58.076395 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.076488 kubelet[2777]: W0423 23:14:58.076408 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.076488 kubelet[2777]: E0423 23:14:58.076419 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.089281 kubelet[2777]: E0423 23:14:58.089219 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.089281 kubelet[2777]: W0423 23:14:58.089255 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.089281 kubelet[2777]: E0423 23:14:58.089282 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.089996 kubelet[2777]: E0423 23:14:58.089592 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.089996 kubelet[2777]: W0423 23:14:58.089606 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.089996 kubelet[2777]: E0423 23:14:58.089624 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.090817 kubelet[2777]: E0423 23:14:58.090615 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.090817 kubelet[2777]: W0423 23:14:58.090651 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.090817 kubelet[2777]: E0423 23:14:58.090677 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.091302 kubelet[2777]: E0423 23:14:58.091280 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.091554 kubelet[2777]: W0423 23:14:58.091426 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.091554 kubelet[2777]: E0423 23:14:58.091459 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.092013 kubelet[2777]: E0423 23:14:58.091987 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.092316 kubelet[2777]: W0423 23:14:58.092174 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.092316 kubelet[2777]: E0423 23:14:58.092208 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.092756 kubelet[2777]: E0423 23:14:58.092712 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.092984 kubelet[2777]: W0423 23:14:58.092868 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.092984 kubelet[2777]: E0423 23:14:58.092888 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.093173 kubelet[2777]: E0423 23:14:58.093161 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.093231 kubelet[2777]: W0423 23:14:58.093218 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.093292 kubelet[2777]: E0423 23:14:58.093280 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.093617 kubelet[2777]: E0423 23:14:58.093541 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.093617 kubelet[2777]: W0423 23:14:58.093556 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.093617 kubelet[2777]: E0423 23:14:58.093567 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.093887 kubelet[2777]: E0423 23:14:58.093874 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.093973 kubelet[2777]: W0423 23:14:58.093944 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.093973 kubelet[2777]: E0423 23:14:58.093962 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.094307 kubelet[2777]: E0423 23:14:58.094218 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.094307 kubelet[2777]: W0423 23:14:58.094230 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.094307 kubelet[2777]: E0423 23:14:58.094240 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.094687 kubelet[2777]: E0423 23:14:58.094530 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.094687 kubelet[2777]: W0423 23:14:58.094542 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.094687 kubelet[2777]: E0423 23:14:58.094551 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.094820 kubelet[2777]: E0423 23:14:58.094798 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.094820 kubelet[2777]: W0423 23:14:58.094816 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.094874 kubelet[2777]: E0423 23:14:58.094829 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.095215 kubelet[2777]: E0423 23:14:58.095199 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.095215 kubelet[2777]: W0423 23:14:58.095217 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.095303 kubelet[2777]: E0423 23:14:58.095230 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.095397 kubelet[2777]: E0423 23:14:58.095386 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.095434 kubelet[2777]: W0423 23:14:58.095397 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.095434 kubelet[2777]: E0423 23:14:58.095406 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.095541 kubelet[2777]: E0423 23:14:58.095531 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.095570 kubelet[2777]: W0423 23:14:58.095541 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.095570 kubelet[2777]: E0423 23:14:58.095550 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.095664 kubelet[2777]: E0423 23:14:58.095656 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.095697 kubelet[2777]: W0423 23:14:58.095665 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.095697 kubelet[2777]: E0423 23:14:58.095673 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.095902 kubelet[2777]: E0423 23:14:58.095889 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.095935 kubelet[2777]: W0423 23:14:58.095904 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.095935 kubelet[2777]: E0423 23:14:58.095916 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 23 23:14:58.096279 kubelet[2777]: E0423 23:14:58.096266 2777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 23 23:14:58.096324 kubelet[2777]: W0423 23:14:58.096279 2777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 23 23:14:58.096324 kubelet[2777]: E0423 23:14:58.096291 2777 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 23 23:14:58.657069 containerd[1534]: time="2026-04-23T23:14:58.656987317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:58.659277 containerd[1534]: time="2026-04-23T23:14:58.658839051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4404646" Apr 23 23:14:58.660919 containerd[1534]: time="2026-04-23T23:14:58.660878506Z" level=info msg="ImageCreate event name:\"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:58.664150 containerd[1534]: time="2026-04-23T23:14:58.664106931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:14:58.665351 containerd[1534]: time="2026-04-23T23:14:58.665298300Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"6980245\" in 1.465318592s" Apr 23 23:14:58.665440 containerd[1534]: time="2026-04-23T23:14:58.665351581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:3867b4c2eaa3321472d76c87dc2b4f8d6cdd45473f2138098e7ef206bc16d421\"" Apr 23 23:14:58.673114 containerd[1534]: time="2026-04-23T23:14:58.673073080Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 23 23:14:58.683046 containerd[1534]: time="2026-04-23T23:14:58.682967636Z" level=info msg="Container 5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:14:58.702557 containerd[1534]: time="2026-04-23T23:14:58.702495226Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5\"" Apr 23 23:14:58.703631 containerd[1534]: time="2026-04-23T23:14:58.703354753Z" level=info msg="StartContainer for \"5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5\"" Apr 23 23:14:58.705569 containerd[1534]: time="2026-04-23T23:14:58.705531650Z" level=info msg="connecting to shim 5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5" address="unix:///run/containerd/s/c629e07e6aac5292494b5c6ece8eb6546f07ed663aa196bde58e46b8d0948f8f" protocol=ttrpc version=3 Apr 23 23:14:58.731246 systemd[1]: Started cri-containerd-5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5.scope - libcontainer container 
5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5. Apr 23 23:14:58.793961 containerd[1534]: time="2026-04-23T23:14:58.793853928Z" level=info msg="StartContainer for \"5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5\" returns successfully" Apr 23 23:14:58.816798 systemd[1]: cri-containerd-5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5.scope: Deactivated successfully. Apr 23 23:14:58.823853 containerd[1534]: time="2026-04-23T23:14:58.823790238Z" level=info msg="received container exit event container_id:\"5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5\" id:\"5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5\" pid:3403 exited_at:{seconds:1776986098 nanos:823135233}" Apr 23 23:14:58.856267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa30452d2dd96afdae80e4578dfc452af2696988c2544cf38db83db350723d5-rootfs.mount: Deactivated successfully. Apr 23 23:14:58.932183 kubelet[2777]: E0423 23:14:58.931606 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:14:59.065726 kubelet[2777]: I0423 23:14:59.065662 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:14:59.068935 containerd[1534]: time="2026-04-23T23:14:59.068862194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 23 23:14:59.093326 kubelet[2777]: I0423 23:14:59.090199 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8544754984-vgm9r" podStartSLOduration=3.1297096189999998 podStartE2EDuration="5.090177235s" podCreationTimestamp="2026-04-23 23:14:54 +0000 UTC" firstStartedPulling="2026-04-23 23:14:55.239038528 +0000 UTC 
m=+19.448266550" lastFinishedPulling="2026-04-23 23:14:57.199506104 +0000 UTC m=+21.408734166" observedRunningTime="2026-04-23 23:14:58.07956268 +0000 UTC m=+22.288790702" watchObservedRunningTime="2026-04-23 23:14:59.090177235 +0000 UTC m=+23.299405257" Apr 23 23:15:00.931979 kubelet[2777]: E0423 23:15:00.931855 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:15:02.931789 kubelet[2777]: E0423 23:15:02.931628 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:15:04.932133 kubelet[2777]: E0423 23:15:04.931938 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:15:06.571661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100737829.mount: Deactivated successfully. 
Apr 23 23:15:06.592710 containerd[1534]: time="2026-04-23T23:15:06.592595102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:06.594850 containerd[1534]: time="2026-04-23T23:15:06.594787477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=153029581" Apr 23 23:15:06.596329 containerd[1534]: time="2026-04-23T23:15:06.596225767Z" level=info msg="ImageCreate event name:\"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:06.600766 containerd[1534]: time="2026-04-23T23:15:06.600693278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:06.603670 containerd[1534]: time="2026-04-23T23:15:06.602677292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"153029443\" in 7.533733018s" Apr 23 23:15:06.603670 containerd[1534]: time="2026-04-23T23:15:06.602720773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:5a8f90ba0ad45873b37c9c512d6391f35086ced5c27f20cfc5c45f777f9941b3\"" Apr 23 23:15:06.610526 containerd[1534]: time="2026-04-23T23:15:06.610464987Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 23 23:15:06.622414 containerd[1534]: time="2026-04-23T23:15:06.622366190Z" level=info msg="Container 
d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:06.639445 containerd[1534]: time="2026-04-23T23:15:06.639384628Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2\"" Apr 23 23:15:06.642392 containerd[1534]: time="2026-04-23T23:15:06.642286448Z" level=info msg="StartContainer for \"d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2\"" Apr 23 23:15:06.645084 containerd[1534]: time="2026-04-23T23:15:06.644582904Z" level=info msg="connecting to shim d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2" address="unix:///run/containerd/s/c629e07e6aac5292494b5c6ece8eb6546f07ed663aa196bde58e46b8d0948f8f" protocol=ttrpc version=3 Apr 23 23:15:06.679260 systemd[1]: Started cri-containerd-d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2.scope - libcontainer container d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2. Apr 23 23:15:06.753335 containerd[1534]: time="2026-04-23T23:15:06.753215222Z" level=info msg="StartContainer for \"d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2\" returns successfully" Apr 23 23:15:06.869565 systemd[1]: cri-containerd-d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2.scope: Deactivated successfully. 
Apr 23 23:15:06.876528 containerd[1534]: time="2026-04-23T23:15:06.876424521Z" level=info msg="received container exit event container_id:\"d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2\" id:\"d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2\" pid:3460 exited_at:{seconds:1776986106 nanos:876086039}" Apr 23 23:15:06.901376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8a8fcc10a788807f11f8203807254e917a8a1aad371878bdea038525dbab0b2-rootfs.mount: Deactivated successfully. Apr 23 23:15:06.933500 kubelet[2777]: E0423 23:15:06.933092 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:15:08.103686 containerd[1534]: time="2026-04-23T23:15:08.103631079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\"" Apr 23 23:15:08.301512 kubelet[2777]: I0423 23:15:08.300794 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:15:08.931952 kubelet[2777]: E0423 23:15:08.931862 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:15:10.932237 kubelet[2777]: E0423 23:15:10.932076 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w65pl" podUID="c8088906-90c9-4538-98fa-60ada979bc32" Apr 23 23:15:11.228245 containerd[1534]: 
time="2026-04-23T23:15:11.227853682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:11.229788 containerd[1534]: time="2026-04-23T23:15:11.229724896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=62266008" Apr 23 23:15:11.230348 containerd[1534]: time="2026-04-23T23:15:11.230314020Z" level=info msg="ImageCreate event name:\"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:11.233490 containerd[1534]: time="2026-04-23T23:15:11.233447203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:11.234439 containerd[1534]: time="2026-04-23T23:15:11.234389129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"64841647\" in 3.130046805s" Apr 23 23:15:11.234656 containerd[1534]: time="2026-04-23T23:15:11.234634291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:0636f5f0fe5e716fd01c674abaaef326193e41f0291d3a9b0ce572a82500c211\"" Apr 23 23:15:11.242341 containerd[1534]: time="2026-04-23T23:15:11.242283546Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 23 23:15:11.254370 containerd[1534]: time="2026-04-23T23:15:11.252685381Z" level=info msg="Container 
e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:11.273365 containerd[1534]: time="2026-04-23T23:15:11.273284410Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7\"" Apr 23 23:15:11.275101 containerd[1534]: time="2026-04-23T23:15:11.274209136Z" level=info msg="StartContainer for \"e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7\"" Apr 23 23:15:11.276186 containerd[1534]: time="2026-04-23T23:15:11.276154270Z" level=info msg="connecting to shim e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7" address="unix:///run/containerd/s/c629e07e6aac5292494b5c6ece8eb6546f07ed663aa196bde58e46b8d0948f8f" protocol=ttrpc version=3 Apr 23 23:15:11.298340 systemd[1]: Started cri-containerd-e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7.scope - libcontainer container e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7. Apr 23 23:15:11.377538 containerd[1534]: time="2026-04-23T23:15:11.377493840Z" level=info msg="StartContainer for \"e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7\" returns successfully" Apr 23 23:15:11.934456 containerd[1534]: time="2026-04-23T23:15:11.934359932Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 23 23:15:11.938688 systemd[1]: cri-containerd-e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7.scope: Deactivated successfully. 
Apr 23 23:15:11.938973 systemd[1]: cri-containerd-e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7.scope: Consumed 542ms CPU time, 188M memory peak, 165.6M written to disk. Apr 23 23:15:11.944040 containerd[1534]: time="2026-04-23T23:15:11.943928321Z" level=info msg="received container exit event container_id:\"e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7\" id:\"e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7\" pid:3520 exited_at:{seconds:1776986111 nanos:943720039}" Apr 23 23:15:11.967203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e73ecc58c4b46fb3b7ed5e66fd54f98d1f8eb9b9c24ac851e5fb15b2a2c107b7-rootfs.mount: Deactivated successfully. Apr 23 23:15:12.006418 kubelet[2777]: I0423 23:15:12.006377 2777 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 23 23:15:12.082183 systemd[1]: Created slice kubepods-burstable-podcdb3e0df_08ea_4ff4_92fd_2432c4aa57f0.slice - libcontainer container kubepods-burstable-podcdb3e0df_08ea_4ff4_92fd_2432c4aa57f0.slice. 
Apr 23 23:15:12.096906 kubelet[2777]: I0423 23:15:12.096411 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d44be6e-c374-47b1-8ba7-226aa56ef63a-tigera-ca-bundle\") pod \"calico-kube-controllers-5666744c8-g96t2\" (UID: \"6d44be6e-c374-47b1-8ba7-226aa56ef63a\") " pod="calico-system/calico-kube-controllers-5666744c8-g96t2" Apr 23 23:15:12.098366 kubelet[2777]: I0423 23:15:12.098006 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87tjn\" (UniqueName: \"kubernetes.io/projected/85fdbd64-8890-4dc6-be56-3ade0f613dbb-kube-api-access-87tjn\") pod \"calico-apiserver-578f5c6d65-vnzn4\" (UID: \"85fdbd64-8890-4dc6-be56-3ade0f613dbb\") " pod="calico-system/calico-apiserver-578f5c6d65-vnzn4" Apr 23 23:15:12.098366 kubelet[2777]: I0423 23:15:12.098068 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf-goldmane-ca-bundle\") pod \"goldmane-57885fdd4c-hmcnz\" (UID: \"ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf\") " pod="calico-system/goldmane-57885fdd4c-hmcnz" Apr 23 23:15:12.098366 kubelet[2777]: I0423 23:15:12.098101 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/394613e9-603e-4a59-bb0b-0635eff0d31b-config-volume\") pod \"coredns-674b8bbfcf-zlk5v\" (UID: \"394613e9-603e-4a59-bb0b-0635eff0d31b\") " pod="kube-system/coredns-674b8bbfcf-zlk5v" Apr 23 23:15:12.098756 kubelet[2777]: I0423 23:15:12.098720 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8frwz\" (UniqueName: \"kubernetes.io/projected/cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0-kube-api-access-8frwz\") pod \"coredns-674b8bbfcf-8dwsq\" 
(UID: \"cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0\") " pod="kube-system/coredns-674b8bbfcf-8dwsq" Apr 23 23:15:12.098931 kubelet[2777]: I0423 23:15:12.098917 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktmmg\" (UniqueName: \"kubernetes.io/projected/394613e9-603e-4a59-bb0b-0635eff0d31b-kube-api-access-ktmmg\") pod \"coredns-674b8bbfcf-zlk5v\" (UID: \"394613e9-603e-4a59-bb0b-0635eff0d31b\") " pod="kube-system/coredns-674b8bbfcf-zlk5v" Apr 23 23:15:12.099149 kubelet[2777]: I0423 23:15:12.099003 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/85fdbd64-8890-4dc6-be56-3ade0f613dbb-calico-apiserver-certs\") pod \"calico-apiserver-578f5c6d65-vnzn4\" (UID: \"85fdbd64-8890-4dc6-be56-3ade0f613dbb\") " pod="calico-system/calico-apiserver-578f5c6d65-vnzn4" Apr 23 23:15:12.099654 kubelet[2777]: I0423 23:15:12.099528 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9vt\" (UniqueName: \"kubernetes.io/projected/ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf-kube-api-access-4r9vt\") pod \"goldmane-57885fdd4c-hmcnz\" (UID: \"ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf\") " pod="calico-system/goldmane-57885fdd4c-hmcnz" Apr 23 23:15:12.101151 kubelet[2777]: I0423 23:15:12.099788 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp7sk\" (UniqueName: \"kubernetes.io/projected/6d44be6e-c374-47b1-8ba7-226aa56ef63a-kube-api-access-sp7sk\") pod \"calico-kube-controllers-5666744c8-g96t2\" (UID: \"6d44be6e-c374-47b1-8ba7-226aa56ef63a\") " pod="calico-system/calico-kube-controllers-5666744c8-g96t2" Apr 23 23:15:12.101496 kubelet[2777]: I0423 23:15:12.101130 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf-config\") pod \"goldmane-57885fdd4c-hmcnz\" (UID: \"ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf\") " pod="calico-system/goldmane-57885fdd4c-hmcnz" Apr 23 23:15:12.101496 kubelet[2777]: I0423 23:15:12.101471 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0-config-volume\") pod \"coredns-674b8bbfcf-8dwsq\" (UID: \"cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0\") " pod="kube-system/coredns-674b8bbfcf-8dwsq" Apr 23 23:15:12.101946 kubelet[2777]: I0423 23:15:12.101740 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf-goldmane-key-pair\") pod \"goldmane-57885fdd4c-hmcnz\" (UID: \"ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf\") " pod="calico-system/goldmane-57885fdd4c-hmcnz" Apr 23 23:15:12.105984 systemd[1]: Created slice kubepods-besteffort-pod6d44be6e_c374_47b1_8ba7_226aa56ef63a.slice - libcontainer container kubepods-besteffort-pod6d44be6e_c374_47b1_8ba7_226aa56ef63a.slice. Apr 23 23:15:12.126009 systemd[1]: Created slice kubepods-burstable-pod394613e9_603e_4a59_bb0b_0635eff0d31b.slice - libcontainer container kubepods-burstable-pod394613e9_603e_4a59_bb0b_0635eff0d31b.slice. Apr 23 23:15:12.141264 systemd[1]: Created slice kubepods-besteffort-pod94bc7e7e_a1f5_4dfd_903b_9703266dad8a.slice - libcontainer container kubepods-besteffort-pod94bc7e7e_a1f5_4dfd_903b_9703266dad8a.slice. Apr 23 23:15:12.153650 systemd[1]: Created slice kubepods-besteffort-podac3e0208_6c86_4f8c_84f6_c1dcb0e2abdf.slice - libcontainer container kubepods-besteffort-podac3e0208_6c86_4f8c_84f6_c1dcb0e2abdf.slice. 
Apr 23 23:15:12.180634 systemd[1]: Created slice kubepods-besteffort-poddd1b9eb6_a9bc_42a8_8140_b36f5efd125a.slice - libcontainer container kubepods-besteffort-poddd1b9eb6_a9bc_42a8_8140_b36f5efd125a.slice. Apr 23 23:15:12.185368 containerd[1534]: time="2026-04-23T23:15:12.185220692Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 23 23:15:12.196050 systemd[1]: Created slice kubepods-besteffort-pod85fdbd64_8890_4dc6_be56_3ade0f613dbb.slice - libcontainer container kubepods-besteffort-pod85fdbd64_8890_4dc6_be56_3ade0f613dbb.slice. Apr 23 23:15:12.204846 kubelet[2777]: I0423 23:15:12.204780 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4kk6\" (UniqueName: \"kubernetes.io/projected/94bc7e7e-a1f5-4dfd-903b-9703266dad8a-kube-api-access-m4kk6\") pod \"calico-apiserver-578f5c6d65-g6dpq\" (UID: \"94bc7e7e-a1f5-4dfd-903b-9703266dad8a\") " pod="calico-system/calico-apiserver-578f5c6d65-g6dpq" Apr 23 23:15:12.205262 kubelet[2777]: I0423 23:15:12.205233 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58szj\" (UniqueName: \"kubernetes.io/projected/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-kube-api-access-58szj\") pod \"whisker-554fbbd85-5jspr\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " pod="calico-system/whisker-554fbbd85-5jspr" Apr 23 23:15:12.208219 kubelet[2777]: I0423 23:15:12.207166 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94bc7e7e-a1f5-4dfd-903b-9703266dad8a-calico-apiserver-certs\") pod \"calico-apiserver-578f5c6d65-g6dpq\" (UID: \"94bc7e7e-a1f5-4dfd-903b-9703266dad8a\") " pod="calico-system/calico-apiserver-578f5c6d65-g6dpq" Apr 23 23:15:12.215066 kubelet[2777]: I0423 
23:15:12.213840 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-nginx-config\") pod \"whisker-554fbbd85-5jspr\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " pod="calico-system/whisker-554fbbd85-5jspr" Apr 23 23:15:12.215066 kubelet[2777]: I0423 23:15:12.214044 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-backend-key-pair\") pod \"whisker-554fbbd85-5jspr\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " pod="calico-system/whisker-554fbbd85-5jspr" Apr 23 23:15:12.215066 kubelet[2777]: I0423 23:15:12.214070 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-ca-bundle\") pod \"whisker-554fbbd85-5jspr\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " pod="calico-system/whisker-554fbbd85-5jspr" Apr 23 23:15:12.216367 containerd[1534]: time="2026-04-23T23:15:12.216298835Z" level=info msg="Container 2754f2d9f118ea340d3f65a731354b8c79fea6e08ae6e17231db951c9620bdde: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:12.244012 containerd[1534]: time="2026-04-23T23:15:12.243012347Z" level=info msg="CreateContainer within sandbox \"752a2f218fe984286bd043bb01ec2a8a6395db4736a28418216fcc2b57baa0c9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2754f2d9f118ea340d3f65a731354b8c79fea6e08ae6e17231db951c9620bdde\"" Apr 23 23:15:12.245519 containerd[1534]: time="2026-04-23T23:15:12.245483124Z" level=info msg="StartContainer for \"2754f2d9f118ea340d3f65a731354b8c79fea6e08ae6e17231db951c9620bdde\"" Apr 23 23:15:12.250264 containerd[1534]: time="2026-04-23T23:15:12.250184638Z" level=info 
msg="connecting to shim 2754f2d9f118ea340d3f65a731354b8c79fea6e08ae6e17231db951c9620bdde" address="unix:///run/containerd/s/c629e07e6aac5292494b5c6ece8eb6546f07ed663aa196bde58e46b8d0948f8f" protocol=ttrpc version=3 Apr 23 23:15:12.319216 systemd[1]: Started cri-containerd-2754f2d9f118ea340d3f65a731354b8c79fea6e08ae6e17231db951c9620bdde.scope - libcontainer container 2754f2d9f118ea340d3f65a731354b8c79fea6e08ae6e17231db951c9620bdde. Apr 23 23:15:12.401015 containerd[1534]: time="2026-04-23T23:15:12.400968679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8dwsq,Uid:cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0,Namespace:kube-system,Attempt:0,}" Apr 23 23:15:12.419321 containerd[1534]: time="2026-04-23T23:15:12.419265570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5666744c8-g96t2,Uid:6d44be6e-c374-47b1-8ba7-226aa56ef63a,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:12.439129 containerd[1534]: time="2026-04-23T23:15:12.438653389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlk5v,Uid:394613e9-603e-4a59-bb0b-0635eff0d31b,Namespace:kube-system,Attempt:0,}" Apr 23 23:15:12.441453 containerd[1534]: time="2026-04-23T23:15:12.441415009Z" level=info msg="StartContainer for \"2754f2d9f118ea340d3f65a731354b8c79fea6e08ae6e17231db951c9620bdde\" returns successfully" Apr 23 23:15:12.451434 containerd[1534]: time="2026-04-23T23:15:12.451377320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578f5c6d65-g6dpq,Uid:94bc7e7e-a1f5-4dfd-903b-9703266dad8a,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:12.463776 containerd[1534]: time="2026-04-23T23:15:12.463412526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-hmcnz,Uid:ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:12.491359 containerd[1534]: time="2026-04-23T23:15:12.491308606Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-554fbbd85-5jspr,Uid:dd1b9eb6-a9bc-42a8-8140-b36f5efd125a,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:12.509324 containerd[1534]: time="2026-04-23T23:15:12.509261895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578f5c6d65-vnzn4,Uid:85fdbd64-8890-4dc6-be56-3ade0f613dbb,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:12.715683 containerd[1534]: time="2026-04-23T23:15:12.713009916Z" level=error msg="Failed to destroy network for sandbox \"955537a5e9863429f7cf63547946a1a792ba5cf7274e1ca3eef5b9484a560eb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:12.739713 containerd[1534]: time="2026-04-23T23:15:12.739436905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5666744c8-g96t2,Uid:6d44be6e-c374-47b1-8ba7-226aa56ef63a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"955537a5e9863429f7cf63547946a1a792ba5cf7274e1ca3eef5b9484a560eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:12.740270 kubelet[2777]: E0423 23:15:12.740222 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955537a5e9863429f7cf63547946a1a792ba5cf7274e1ca3eef5b9484a560eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:12.740360 kubelet[2777]: E0423 23:15:12.740295 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"955537a5e9863429f7cf63547946a1a792ba5cf7274e1ca3eef5b9484a560eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5666744c8-g96t2" Apr 23 23:15:12.740360 kubelet[2777]: E0423 23:15:12.740319 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"955537a5e9863429f7cf63547946a1a792ba5cf7274e1ca3eef5b9484a560eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5666744c8-g96t2" Apr 23 23:15:12.740410 kubelet[2777]: E0423 23:15:12.740365 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5666744c8-g96t2_calico-system(6d44be6e-c374-47b1-8ba7-226aa56ef63a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5666744c8-g96t2_calico-system(6d44be6e-c374-47b1-8ba7-226aa56ef63a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"955537a5e9863429f7cf63547946a1a792ba5cf7274e1ca3eef5b9484a560eb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5666744c8-g96t2" podUID="6d44be6e-c374-47b1-8ba7-226aa56ef63a" Apr 23 23:15:12.944287 systemd[1]: Created slice kubepods-besteffort-podc8088906_90c9_4538_98fa_60ada979bc32.slice - libcontainer container kubepods-besteffort-podc8088906_90c9_4538_98fa_60ada979bc32.slice. 
Apr 23 23:15:12.957563 containerd[1534]: time="2026-04-23T23:15:12.957338587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w65pl,Uid:c8088906-90c9-4538-98fa-60ada979bc32,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:12.957 [INFO][3719] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:12.957 [INFO][3719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" iface="eth0" netns="/var/run/netns/cni-dc31de77-2a84-e3fa-fa1f-d4193eb38f57" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:12.958 [INFO][3719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" iface="eth0" netns="/var/run/netns/cni-dc31de77-2a84-e3fa-fa1f-d4193eb38f57" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:12.958 [INFO][3719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" iface="eth0" netns="/var/run/netns/cni-dc31de77-2a84-e3fa-fa1f-d4193eb38f57" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:12.958 [INFO][3719] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:12.959 [INFO][3719] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:13.027 [INFO][3785] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" HandleID="k8s-pod-network.d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" Workload="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:13.027 [INFO][3785] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:13.065404 containerd[1534]: 2026-04-23 23:15:13.027 [INFO][3785] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:15:13.065756 containerd[1534]: 2026-04-23 23:15:13.047 [WARNING][3785] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" HandleID="k8s-pod-network.d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" Workload="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:13.065756 containerd[1534]: 2026-04-23 23:15:13.047 [INFO][3785] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" HandleID="k8s-pod-network.d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" Workload="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:13.065756 containerd[1534]: 2026-04-23 23:15:13.051 [INFO][3785] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:13.065756 containerd[1534]: 2026-04-23 23:15:13.059 [INFO][3719] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2" Apr 23 23:15:13.070005 containerd[1534]: time="2026-04-23T23:15:13.069877991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-hmcnz,Uid:ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:13.071271 kubelet[2777]: E0423 23:15:13.070236 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 23 23:15:13.071271 kubelet[2777]: E0423 23:15:13.070295 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-hmcnz" Apr 23 23:15:13.071271 kubelet[2777]: E0423 23:15:13.070313 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-hmcnz" Apr 23 23:15:13.071639 kubelet[2777]: E0423 23:15:13.070357 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-57885fdd4c-hmcnz_calico-system(ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-57885fdd4c-hmcnz_calico-system(ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d493c5aaf0de3da459249e27de34a84424c0e6e83edbdea026f47d1bf46cb6a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-57885fdd4c-hmcnz" podUID="ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:12.916 [INFO][3720] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:12.916 [INFO][3720] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" iface="eth0" netns="/var/run/netns/cni-799a8647-ed05-ff04-7036-a4ebd5a1c286" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:12.917 [INFO][3720] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" iface="eth0" netns="/var/run/netns/cni-799a8647-ed05-ff04-7036-a4ebd5a1c286" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:12.917 [INFO][3720] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" iface="eth0" netns="/var/run/netns/cni-799a8647-ed05-ff04-7036-a4ebd5a1c286" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:12.917 [INFO][3720] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:12.918 [INFO][3720] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:13.026 [INFO][3770] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" HandleID="k8s-pod-network.0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:13.035 [INFO][3770] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 23 23:15:13.110053 containerd[1534]: 2026-04-23 23:15:13.053 [INFO][3770] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:15:13.110342 containerd[1534]: 2026-04-23 23:15:13.080 [WARNING][3770] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" HandleID="k8s-pod-network.0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.110342 containerd[1534]: 2026-04-23 23:15:13.080 [INFO][3770] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" HandleID="k8s-pod-network.0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.110342 containerd[1534]: 2026-04-23 23:15:13.083 [INFO][3770] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:13.110342 containerd[1534]: 2026-04-23 23:15:13.088 [INFO][3720] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5" Apr 23 23:15:13.117928 containerd[1534]: time="2026-04-23T23:15:13.117616812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578f5c6d65-g6dpq,Uid:94bc7e7e-a1f5-4dfd-903b-9703266dad8a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:13.119688 kubelet[2777]: E0423 23:15:13.119575 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:13.119688 kubelet[2777]: E0423 23:15:13.119639 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-578f5c6d65-g6dpq" Apr 23 23:15:13.119688 kubelet[2777]: E0423 23:15:13.119666 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-578f5c6d65-g6dpq" Apr 23 23:15:13.120469 kubelet[2777]: E0423 23:15:13.119716 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-578f5c6d65-g6dpq_calico-system(94bc7e7e-a1f5-4dfd-903b-9703266dad8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-578f5c6d65-g6dpq_calico-system(94bc7e7e-a1f5-4dfd-903b-9703266dad8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ca138d88ca2c3f61f05b2e4734b2981e19f117b8427bb6ad37f195b0b25f2f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-578f5c6d65-g6dpq" podUID="94bc7e7e-a1f5-4dfd-903b-9703266dad8a" Apr 23 23:15:13.177941 containerd[1534]: time="2026-04-23T23:15:13.177896962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578f5c6d65-g6dpq,Uid:94bc7e7e-a1f5-4dfd-903b-9703266dad8a,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:13.181502 containerd[1534]: time="2026-04-23T23:15:13.181381867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-hmcnz,Uid:ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:12.955 [INFO][3717] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:12.958 [INFO][3717] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" iface="eth0" netns="/var/run/netns/cni-a01e3aaa-a98a-6495-eb85-6ba58d5bd532" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:12.959 [INFO][3717] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" iface="eth0" netns="/var/run/netns/cni-a01e3aaa-a98a-6495-eb85-6ba58d5bd532" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:12.959 [INFO][3717] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" iface="eth0" netns="/var/run/netns/cni-a01e3aaa-a98a-6495-eb85-6ba58d5bd532" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:12.960 [INFO][3717] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:12.960 [INFO][3717] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:13.127 [INFO][3784] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" HandleID="k8s-pod-network.be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:13.129 [INFO][3784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:13.187744 containerd[1534]: 2026-04-23 23:15:13.129 [INFO][3784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:13.188075 containerd[1534]: 2026-04-23 23:15:13.163 [WARNING][3784] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" HandleID="k8s-pod-network.be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:13.188075 containerd[1534]: 2026-04-23 23:15:13.163 [INFO][3784] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" HandleID="k8s-pod-network.be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:13.188075 containerd[1534]: 2026-04-23 23:15:13.169 [INFO][3784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:13.188075 containerd[1534]: 2026-04-23 23:15:13.173 [INFO][3717] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924" Apr 23 23:15:13.211587 containerd[1534]: time="2026-04-23T23:15:13.211454881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8dwsq,Uid:cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:13.212186 kubelet[2777]: E0423 23:15:13.211988 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:13.212270 kubelet[2777]: E0423 23:15:13.212203 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8dwsq" Apr 23 23:15:13.212270 kubelet[2777]: E0423 23:15:13.212230 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8dwsq" Apr 23 23:15:13.212925 kubelet[2777]: E0423 23:15:13.212877 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8dwsq_kube-system(cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8dwsq_kube-system(cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be725468bf10d48cc69fa3208b50252479e6bb39651de4d7341fb5f2e2454924\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8dwsq" podUID="cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0" Apr 23 23:15:13.224827 kubelet[2777]: I0423 23:15:13.224688 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lpdxj" podStartSLOduration=3.362322607 podStartE2EDuration="19.224668455s" podCreationTimestamp="2026-04-23 23:14:54 +0000 UTC" firstStartedPulling="2026-04-23 23:14:55.373704733 +0000 UTC m=+19.582932715" lastFinishedPulling="2026-04-23 23:15:11.236050541 +0000 UTC m=+35.445278563" observedRunningTime="2026-04-23 23:15:13.223991011 +0000 UTC m=+37.433219033" watchObservedRunningTime="2026-04-23 23:15:13.224668455 +0000 UTC m=+37.433896437" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:12.929 [INFO][3740] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:12.930 [INFO][3740] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" iface="eth0" netns="/var/run/netns/cni-71c12d2a-22be-7a79-c2ee-32f289aad235" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:12.937 [INFO][3740] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" iface="eth0" netns="/var/run/netns/cni-71c12d2a-22be-7a79-c2ee-32f289aad235" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:12.939 [INFO][3740] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" iface="eth0" netns="/var/run/netns/cni-71c12d2a-22be-7a79-c2ee-32f289aad235" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:12.939 [INFO][3740] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:12.939 [INFO][3740] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:13.143 [INFO][3781] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" HandleID="k8s-pod-network.aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:13.143 [INFO][3781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:13.241160 containerd[1534]: 2026-04-23 23:15:13.168 [INFO][3781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:13.242959 containerd[1534]: 2026-04-23 23:15:13.192 [WARNING][3781] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. Ignoring ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" HandleID="k8s-pod-network.aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:13.242959 containerd[1534]: 2026-04-23 23:15:13.192 [INFO][3781] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" HandleID="k8s-pod-network.aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:13.242959 containerd[1534]: 2026-04-23 23:15:13.195 [INFO][3781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:13.242959 containerd[1534]: 2026-04-23 23:15:13.211 [INFO][3740] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6" Apr 23 23:15:13.243691 containerd[1534]: time="2026-04-23T23:15:13.243634871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlk5v,Uid:394613e9-603e-4a59-bb0b-0635eff0d31b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:13.244501 kubelet[2777]: E0423 23:15:13.244458 2777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 23 23:15:13.244758 kubelet[2777]: E0423 23:15:13.244518 2777 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zlk5v" Apr 23 23:15:13.244758 kubelet[2777]: E0423 23:15:13.244560 2777 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zlk5v" Apr 23 23:15:13.244758 kubelet[2777]: E0423 23:15:13.244606 2777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zlk5v_kube-system(394613e9-603e-4a59-bb0b-0635eff0d31b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zlk5v_kube-system(394613e9-603e-4a59-bb0b-0635eff0d31b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aad49400f9e41ad2f89aa8d6be3525f8e2748d0d59776e4819457989edd2deb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zlk5v" podUID="394613e9-603e-4a59-bb0b-0635eff0d31b" Apr 23 23:15:13.469716 systemd-networkd[1425]: cali0348a8473e6: Link UP Apr 23 23:15:13.472182 systemd-networkd[1425]: cali0348a8473e6: Gained carrier Apr 23 23:15:13.520701 containerd[1534]: 2026-04-23 23:15:12.770 [ERROR][3661] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:15:13.520701 containerd[1534]: 2026-04-23 23:15:12.853 [INFO][3661] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0 calico-apiserver-578f5c6d65- calico-system 85fdbd64-8890-4dc6-be56-3ade0f613dbb 841 0 2026-04-23 23:14:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:578f5c6d65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b calico-apiserver-578f5c6d65-vnzn4 eth0 
calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0348a8473e6 [] [] }} ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-" Apr 23 23:15:13.520701 containerd[1534]: 2026-04-23 23:15:12.855 [INFO][3661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" Apr 23 23:15:13.520701 containerd[1534]: 2026-04-23 23:15:13.102 [INFO][3766] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" HandleID="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.151 [INFO][3766] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" HandleID="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"calico-apiserver-578f5c6d65-vnzn4", "timestamp":"2026-04-23 23:15:13.102747586 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000356580)} Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.151 [INFO][3766] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.197 [INFO][3766] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.202 [INFO][3766] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.280 [INFO][3766] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.334 [INFO][3766] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.349 [INFO][3766] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.355 [INFO][3766] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521305 containerd[1534]: 2026-04-23 23:15:13.360 [INFO][3766] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.361 [INFO][3766] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.366 [INFO][3766] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677 Apr 23 23:15:13.521491 containerd[1534]: 
2026-04-23 23:15:13.378 [INFO][3766] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.389 [INFO][3766] ipam/ipam.go 1276: Failed to update block block=192.168.114.0/26 error=update conflict: IPAMBlock(192-168-114-0-26) handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.412 [INFO][3766] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.420 [INFO][3766] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677 Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.430 [INFO][3766] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.439 [INFO][3766] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.1/26] block=192.168.114.0/26 handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521491 containerd[1534]: 2026-04-23 23:15:13.441 [INFO][3766] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.1/26] handle="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.521712 containerd[1534]: 2026-04-23 23:15:13.441 [INFO][3766] ipam/ipam_plugin.go 459: Released host-wide IPAM 
lock. Apr 23 23:15:13.521712 containerd[1534]: 2026-04-23 23:15:13.441 [INFO][3766] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.1/26] IPv6=[] ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" HandleID="k8s-pod-network.1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" Apr 23 23:15:13.521753 containerd[1534]: 2026-04-23 23:15:13.448 [INFO][3661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0", GenerateName:"calico-apiserver-578f5c6d65-", Namespace:"calico-system", SelfLink:"", UID:"85fdbd64-8890-4dc6-be56-3ade0f613dbb", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578f5c6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"calico-apiserver-578f5c6d65-vnzn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0348a8473e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.521753 containerd[1534]: 2026-04-23 23:15:13.450 [INFO][3661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.1/32] ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" Apr 23 23:15:13.521753 containerd[1534]: 2026-04-23 23:15:13.451 [INFO][3661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0348a8473e6 ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" Apr 23 23:15:13.521753 containerd[1534]: 2026-04-23 23:15:13.476 [INFO][3661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" Apr 23 23:15:13.521753 containerd[1534]: 2026-04-23 23:15:13.479 [INFO][3661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0", GenerateName:"calico-apiserver-578f5c6d65-", Namespace:"calico-system", SelfLink:"", UID:"85fdbd64-8890-4dc6-be56-3ade0f613dbb", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578f5c6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677", Pod:"calico-apiserver-578f5c6d65-vnzn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0348a8473e6", MAC:"f2:00:15:dc:67:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.521753 containerd[1534]: 2026-04-23 23:15:13.515 [INFO][3661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-vnzn4" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--vnzn4-eth0" Apr 23 23:15:13.551485 systemd-networkd[1425]: calic577ee08614: Link UP Apr 23 23:15:13.551741 
systemd-networkd[1425]: calic577ee08614: Gained carrier Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:12.737 [ERROR][3650] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:12.872 [INFO][3650] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0 whisker-554fbbd85- calico-system dd1b9eb6-a9bc-42a8-8140-b36f5efd125a 849 0 2026-04-23 23:14:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:554fbbd85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b whisker-554fbbd85-5jspr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic577ee08614 [] [] }} ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:12.872 [INFO][3650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.112 [INFO][3768] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 
23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.160 [INFO][3768] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a4090), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"whisker-554fbbd85-5jspr", "timestamp":"2026-04-23 23:15:13.112762417 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000cdce0)} Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.160 [INFO][3768] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.441 [INFO][3768] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.441 [INFO][3768] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.450 [INFO][3768] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.461 [INFO][3768] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.477 [INFO][3768] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.481 [INFO][3768] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.491 [INFO][3768] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.492 [INFO][3768] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.501 [INFO][3768] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820 Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.524 [INFO][3768] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.540 [INFO][3768] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.114.2/26] block=192.168.114.0/26 handle="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.540 [INFO][3768] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.2/26] handle="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.592489 containerd[1534]: 2026-04-23 23:15:13.540 [INFO][3768] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:13.594115 containerd[1534]: 2026-04-23 23:15:13.540 [INFO][3768] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.2/26] IPv6=[] ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:13.594115 containerd[1534]: 2026-04-23 23:15:13.545 [INFO][3650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0", GenerateName:"whisker-554fbbd85-", Namespace:"calico-system", SelfLink:"", UID:"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"554fbbd85", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"whisker-554fbbd85-5jspr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic577ee08614", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.594115 containerd[1534]: 2026-04-23 23:15:13.545 [INFO][3650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.2/32] ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:13.594115 containerd[1534]: 2026-04-23 23:15:13.545 [INFO][3650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic577ee08614 ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:13.594115 containerd[1534]: 2026-04-23 23:15:13.552 [INFO][3650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:13.594115 containerd[1534]: 2026-04-23 23:15:13.557 [INFO][3650] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0", GenerateName:"whisker-554fbbd85-", Namespace:"calico-system", SelfLink:"", UID:"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"554fbbd85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820", Pod:"whisker-554fbbd85-5jspr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic577ee08614", MAC:"56:32:7c:9a:86:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.594396 containerd[1534]: 2026-04-23 23:15:13.583 [INFO][3650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" 
Namespace="calico-system" Pod="whisker-554fbbd85-5jspr" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:13.631385 containerd[1534]: time="2026-04-23T23:15:13.630724112Z" level=info msg="connecting to shim 1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677" address="unix:///run/containerd/s/e683712e95184a12f62de2340b0eddacdf748c0cc6a8a1ed79c86512ff5034ef" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:13.678324 containerd[1534]: time="2026-04-23T23:15:13.678184891Z" level=info msg="connecting to shim 7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" address="unix:///run/containerd/s/17da84dad486ab313f628ef66b6bd3c6ee88ce78d36ae0df818e28dec212fd2e" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:13.730297 systemd[1]: Started cri-containerd-1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677.scope - libcontainer container 1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677. Apr 23 23:15:13.737142 systemd-networkd[1425]: calia4aef94e781: Link UP Apr 23 23:15:13.741292 systemd-networkd[1425]: calia4aef94e781: Gained carrier Apr 23 23:15:13.768099 systemd[1]: Started cri-containerd-7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820.scope - libcontainer container 7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820. 
Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.106 [ERROR][3786] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.163 [INFO][3786] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0 csi-node-driver- calico-system c8088906-90c9-4538-98fa-60ada979bc32 701 0 2026-04-23 23:14:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:74865c565 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b csi-node-driver-w65pl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia4aef94e781 [] [] }} ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.163 [INFO][3786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.330 [INFO][3821] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" HandleID="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" 
Workload="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.358 [INFO][3821] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" HandleID="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Workload="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ee1c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"csi-node-driver-w65pl", "timestamp":"2026-04-23 23:15:13.330143088 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000c62c0)} Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.358 [INFO][3821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.540 [INFO][3821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.540 [INFO][3821] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.551 [INFO][3821] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.587 [INFO][3821] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.613 [INFO][3821] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.628 [INFO][3821] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.638 [INFO][3821] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.638 [INFO][3821] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.652 [INFO][3821] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.673 [INFO][3821] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.715 [INFO][3821] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.114.3/26] block=192.168.114.0/26 handle="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.715 [INFO][3821] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.3/26] handle="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.787465 containerd[1534]: 2026-04-23 23:15:13.715 [INFO][3821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:13.788003 containerd[1534]: 2026-04-23 23:15:13.715 [INFO][3821] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.3/26] IPv6=[] ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" HandleID="k8s-pod-network.4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Workload="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" Apr 23 23:15:13.788003 containerd[1534]: 2026-04-23 23:15:13.723 [INFO][3786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8088906-90c9-4538-98fa-60ada979bc32", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"csi-node-driver-w65pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia4aef94e781", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.788003 containerd[1534]: 2026-04-23 23:15:13.724 [INFO][3786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.3/32] ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" Apr 23 23:15:13.788003 containerd[1534]: 2026-04-23 23:15:13.724 [INFO][3786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4aef94e781 ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" Apr 23 23:15:13.788003 containerd[1534]: 2026-04-23 23:15:13.749 [INFO][3786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" Apr 23 23:15:13.788003 
containerd[1534]: 2026-04-23 23:15:13.752 [INFO][3786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c8088906-90c9-4538-98fa-60ada979bc32", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c", Pod:"csi-node-driver-w65pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia4aef94e781", MAC:"b6:a9:52:cf:fe:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.788421 containerd[1534]: 
2026-04-23 23:15:13.781 [INFO][3786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" Namespace="calico-system" Pod="csi-node-driver-w65pl" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-csi--node--driver--w65pl-eth0" Apr 23 23:15:13.833047 containerd[1534]: time="2026-04-23T23:15:13.832699753Z" level=info msg="connecting to shim 4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c" address="unix:///run/containerd/s/217c8a1946844a8903a8c1933333210f115393ca4877ba876e0660d41b33f1e6" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:13.868774 systemd-networkd[1425]: cali769d850eeb1: Link UP Apr 23 23:15:13.872797 systemd-networkd[1425]: cali769d850eeb1: Gained carrier Apr 23 23:15:13.913216 systemd[1]: Started cri-containerd-4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c.scope - libcontainer container 4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c. 
Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.283 [ERROR][3827] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.328 [INFO][3827] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0 calico-apiserver-578f5c6d65- calico-system 94bc7e7e-a1f5-4dfd-903b-9703266dad8a 858 0 2026-04-23 23:14:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:578f5c6d65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b calico-apiserver-578f5c6d65-g6dpq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali769d850eeb1 [] [] }} ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.328 [INFO][3827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.416 [INFO][3857] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" HandleID="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" 
Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.450 [INFO][3857] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" HandleID="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000394100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"calico-apiserver-578f5c6d65-g6dpq", "timestamp":"2026-04-23 23:15:13.416341463 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400039a580)} Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.450 [INFO][3857] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.716 [INFO][3857] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.718 [INFO][3857] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.745 [INFO][3857] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.766 [INFO][3857] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.783 [INFO][3857] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.790 [INFO][3857] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.797 [INFO][3857] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.798 [INFO][3857] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.804 [INFO][3857] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203 Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.817 [INFO][3857] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.849 [INFO][3857] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.114.4/26] block=192.168.114.0/26 handle="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.849 [INFO][3857] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.4/26] handle="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:13.915146 containerd[1534]: 2026-04-23 23:15:13.850 [INFO][3857] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:13.916646 containerd[1534]: 2026-04-23 23:15:13.850 [INFO][3857] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.4/26] IPv6=[] ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" HandleID="k8s-pod-network.dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.916646 containerd[1534]: 2026-04-23 23:15:13.857 [INFO][3827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0", GenerateName:"calico-apiserver-578f5c6d65-", Namespace:"calico-system", SelfLink:"", UID:"94bc7e7e-a1f5-4dfd-903b-9703266dad8a", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"578f5c6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"calico-apiserver-578f5c6d65-g6dpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali769d850eeb1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.916646 containerd[1534]: 2026-04-23 23:15:13.857 [INFO][3827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.4/32] ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.916646 containerd[1534]: 2026-04-23 23:15:13.859 [INFO][3827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali769d850eeb1 ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.916646 containerd[1534]: 2026-04-23 23:15:13.872 [INFO][3827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" 
WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.916869 containerd[1534]: 2026-04-23 23:15:13.878 [INFO][3827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0", GenerateName:"calico-apiserver-578f5c6d65-", Namespace:"calico-system", SelfLink:"", UID:"94bc7e7e-a1f5-4dfd-903b-9703266dad8a", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"578f5c6d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203", Pod:"calico-apiserver-578f5c6d65-g6dpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali769d850eeb1", MAC:"ae:a6:7b:28:d7:d2", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:13.916869 containerd[1534]: 2026-04-23 23:15:13.909 [INFO][3827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" Namespace="calico-system" Pod="calico-apiserver-578f5c6d65-g6dpq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--apiserver--578f5c6d65--g6dpq-eth0" Apr 23 23:15:13.965252 systemd-networkd[1425]: cali76392e53bb3: Link UP Apr 23 23:15:13.966589 systemd-networkd[1425]: cali76392e53bb3: Gained carrier Apr 23 23:15:13.994277 containerd[1534]: time="2026-04-23T23:15:13.993816022Z" level=info msg="connecting to shim dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203" address="unix:///run/containerd/s/d90916cef179158929da978577369661822c715472635b739a35fcdc9b79cd83" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.333 [ERROR][3829] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.370 [INFO][3829] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0 goldmane-57885fdd4c- calico-system ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf 861 0 2026-04-23 23:14:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:57885fdd4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b goldmane-57885fdd4c-hmcnz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali76392e53bb3 [] [] }} 
ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.371 [INFO][3829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.494 [INFO][3874] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" HandleID="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Workload="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.529 [INFO][3874] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" HandleID="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Workload="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004065d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"goldmane-57885fdd4c-hmcnz", "timestamp":"2026-04-23 23:15:13.494194098 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002331e0)} Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.529 [INFO][3874] ipam/ipam_plugin.go 438: About to 
acquire host-wide IPAM lock. Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.850 [INFO][3874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.850 [INFO][3874] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.863 [INFO][3874] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.884 [INFO][3874] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.911 [INFO][3874] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.918 [INFO][3874] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.926 [INFO][3874] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.927 [INFO][3874] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.931 [INFO][3874] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.941 [INFO][3874] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 
handle="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.955 [INFO][3874] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.5/26] block=192.168.114.0/26 handle="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.955 [INFO][3874] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.5/26] handle="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.011065 containerd[1534]: 2026-04-23 23:15:13.956 [INFO][3874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:14.011647 containerd[1534]: 2026-04-23 23:15:13.956 [INFO][3874] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.5/26] IPv6=[] ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" HandleID="k8s-pod-network.77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Workload="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:14.011647 containerd[1534]: 2026-04-23 23:15:13.959 [INFO][3829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 
14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"goldmane-57885fdd4c-hmcnz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76392e53bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:14.011647 containerd[1534]: 2026-04-23 23:15:13.960 [INFO][3829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.5/32] ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:14.011647 containerd[1534]: 2026-04-23 23:15:13.960 [INFO][3829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76392e53bb3 ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:14.011647 containerd[1534]: 2026-04-23 23:15:13.968 [INFO][3829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" 
Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:14.011647 containerd[1534]: 2026-04-23 23:15:13.975 [INFO][3829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db", Pod:"goldmane-57885fdd4c-hmcnz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.114.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali76392e53bb3", MAC:"a6:d6:b8:fa:f0:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:14.011822 containerd[1534]: 2026-04-23 23:15:14.004 [INFO][3829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" Namespace="calico-system" Pod="goldmane-57885fdd4c-hmcnz" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-goldmane--57885fdd4c--hmcnz-eth0" Apr 23 23:15:14.023335 containerd[1534]: time="2026-04-23T23:15:14.023118551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-554fbbd85-5jspr,Uid:dd1b9eb6-a9bc-42a8-8140-b36f5efd125a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\"" Apr 23 23:15:14.027888 containerd[1534]: time="2026-04-23T23:15:14.027841464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 23 23:15:14.067131 containerd[1534]: time="2026-04-23T23:15:14.067080503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578f5c6d65-vnzn4,Uid:85fdbd64-8890-4dc6-be56-3ade0f613dbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677\"" Apr 23 23:15:14.072482 systemd[1]: Started cri-containerd-dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203.scope - libcontainer container dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203. 
Apr 23 23:15:14.084952 containerd[1534]: time="2026-04-23T23:15:14.084819549Z" level=info msg="connecting to shim 77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db" address="unix:///run/containerd/s/d2cd8f41cbed58abfe1ae787057de764695d6fa4be439ad69b944b602aaa3d06" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:14.099700 containerd[1534]: time="2026-04-23T23:15:14.099472693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w65pl,Uid:c8088906-90c9-4538-98fa-60ada979bc32,Namespace:calico-system,Attempt:0,} returns sandbox id \"4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c\"" Apr 23 23:15:14.129671 containerd[1534]: time="2026-04-23T23:15:14.129615587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-578f5c6d65-g6dpq,Uid:94bc7e7e-a1f5-4dfd-903b-9703266dad8a,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203\"" Apr 23 23:15:14.144326 systemd[1]: Started cri-containerd-77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db.scope - libcontainer container 77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db. 
Apr 23 23:15:14.188257 containerd[1534]: time="2026-04-23T23:15:14.187774880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlk5v,Uid:394613e9-603e-4a59-bb0b-0635eff0d31b,Namespace:kube-system,Attempt:0,}" Apr 23 23:15:14.189188 containerd[1534]: time="2026-04-23T23:15:14.189133409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8dwsq,Uid:cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0,Namespace:kube-system,Attempt:0,}" Apr 23 23:15:14.201752 containerd[1534]: time="2026-04-23T23:15:14.201710179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-hmcnz,Uid:ac3e0208-6c86-4f8c-84f6-c1dcb0e2abdf,Namespace:calico-system,Attempt:0,} returns sandbox id \"77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db\"" Apr 23 23:15:14.411354 systemd-networkd[1425]: cali71d1bfcb67f: Link UP Apr 23 23:15:14.412179 systemd-networkd[1425]: cali71d1bfcb67f: Gained carrier Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.261 [ERROR][4167] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.288 [INFO][4167] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0 coredns-674b8bbfcf- kube-system 394613e9-603e-4a59-bb0b-0635eff0d31b 859 0 2026-04-23 23:14:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b coredns-674b8bbfcf-zlk5v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71d1bfcb67f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.288 [INFO][4167] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.341 [INFO][4217] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" HandleID="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.356 [INFO][4217] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" HandleID="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ffd20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"coredns-674b8bbfcf-zlk5v", "timestamp":"2026-04-23 23:15:14.341772773 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b5760)} Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.356 [INFO][4217] ipam/ipam_plugin.go 438: About to acquire host-wide 
IPAM lock. Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.356 [INFO][4217] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.356 [INFO][4217] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.362 [INFO][4217] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.370 [INFO][4217] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.377 [INFO][4217] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.380 [INFO][4217] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.384 [INFO][4217] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.384 [INFO][4217] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.386 [INFO][4217] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982 Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.392 [INFO][4217] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 
handle="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.401 [INFO][4217] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.114.6/26] block=192.168.114.0/26 handle="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.401 [INFO][4217] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.6/26] handle="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.430631 containerd[1534]: 2026-04-23 23:15:14.401 [INFO][4217] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:14.431240 containerd[1534]: 2026-04-23 23:15:14.401 [INFO][4217] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.6/26] IPv6=[] ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" HandleID="k8s-pod-network.1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:14.431240 containerd[1534]: 2026-04-23 23:15:14.407 [INFO][4167] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"394613e9-603e-4a59-bb0b-0635eff0d31b", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 40, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"coredns-674b8bbfcf-zlk5v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71d1bfcb67f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:14.431240 containerd[1534]: 2026-04-23 23:15:14.407 [INFO][4167] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.6/32] ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:14.431240 containerd[1534]: 2026-04-23 23:15:14.407 [INFO][4167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71d1bfcb67f ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:14.431240 containerd[1534]: 2026-04-23 23:15:14.412 [INFO][4167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:14.431446 containerd[1534]: 2026-04-23 23:15:14.412 [INFO][4167] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"394613e9-603e-4a59-bb0b-0635eff0d31b", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982", Pod:"coredns-674b8bbfcf-zlk5v", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.114.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71d1bfcb67f", MAC:"1a:b6:98:51:7b:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:14.431446 containerd[1534]: 2026-04-23 23:15:14.427 [INFO][4167] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" Namespace="kube-system" Pod="coredns-674b8bbfcf-zlk5v" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--zlk5v-eth0" Apr 23 23:15:14.470842 containerd[1534]: time="2026-04-23T23:15:14.470736449Z" level=info msg="connecting to shim 1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982" address="unix:///run/containerd/s/594fd859f74776a707fb30e3af21dd60e8bed6cbbdce604f88c95ac75e1e0556" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:14.517281 systemd[1]: Started cri-containerd-1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982.scope - libcontainer container 1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982. 
Apr 23 23:15:14.535558 systemd-networkd[1425]: cali107193410fb: Link UP Apr 23 23:15:14.538325 systemd-networkd[1425]: cali107193410fb: Gained carrier Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.248 [ERROR][4173] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.284 [INFO][4173] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0 coredns-674b8bbfcf- kube-system cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0 860 0 2026-04-23 23:14:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b coredns-674b8bbfcf-8dwsq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali107193410fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.284 [INFO][4173] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.345 [INFO][4215] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" 
HandleID="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.361 [INFO][4215] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" HandleID="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ffaa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"coredns-674b8bbfcf-8dwsq", "timestamp":"2026-04-23 23:15:14.345232438 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030cf20)} Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.361 [INFO][4215] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.401 [INFO][4215] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.402 [INFO][4215] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.463 [INFO][4215] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.471 [INFO][4215] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.479 [INFO][4215] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.483 [INFO][4215] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.487 [INFO][4215] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.487 [INFO][4215] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.494 [INFO][4215] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.501 [INFO][4215] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.518 [INFO][4215] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.114.7/26] block=192.168.114.0/26 handle="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.518 [INFO][4215] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.7/26] handle="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:14.562115 containerd[1534]: 2026-04-23 23:15:14.518 [INFO][4215] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:14.562888 containerd[1534]: 2026-04-23 23:15:14.518 [INFO][4215] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.7/26] IPv6=[] ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" HandleID="k8s-pod-network.ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Workload="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:14.562888 containerd[1534]: 2026-04-23 23:15:14.522 [INFO][4173] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"coredns-674b8bbfcf-8dwsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali107193410fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:14.562888 containerd[1534]: 2026-04-23 23:15:14.522 [INFO][4173] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.7/32] ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:14.562888 containerd[1534]: 2026-04-23 23:15:14.523 [INFO][4173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali107193410fb ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:14.562888 containerd[1534]: 2026-04-23 23:15:14.539 [INFO][4173] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:14.563047 containerd[1534]: 2026-04-23 23:15:14.540 [INFO][4173] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a", Pod:"coredns-674b8bbfcf-8dwsq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali107193410fb", 
MAC:"12:69:21:c3:e4:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:14.563047 containerd[1534]: 2026-04-23 23:15:14.556 [INFO][4173] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" Namespace="kube-system" Pod="coredns-674b8bbfcf-8dwsq" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-coredns--674b8bbfcf--8dwsq-eth0" Apr 23 23:15:14.578242 systemd-networkd[1425]: cali0348a8473e6: Gained IPv6LL Apr 23 23:15:14.595293 containerd[1534]: time="2026-04-23T23:15:14.595249333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zlk5v,Uid:394613e9-603e-4a59-bb0b-0635eff0d31b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982\"" Apr 23 23:15:14.605314 containerd[1534]: time="2026-04-23T23:15:14.605274004Z" level=info msg="CreateContainer within sandbox \"1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 23 23:15:14.620411 containerd[1534]: time="2026-04-23T23:15:14.619999028Z" level=info msg="Container ca0e0d3565cb7e010972999760be078d2a46963a0a4cf2d1e025eaa958ba0397: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:14.623319 containerd[1534]: time="2026-04-23T23:15:14.623271812Z" level=info msg="connecting to shim ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a" 
address="unix:///run/containerd/s/b02123387f29daf61c38d2ef45068dcd1eeda9d9f3fdc1dce5bd420dddc204de" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:14.632053 containerd[1534]: time="2026-04-23T23:15:14.631964953Z" level=info msg="CreateContainer within sandbox \"1e68643ce04331b53b8b8d6cb27f9b44c6c1f08b077d0c7e59c0c214d3602982\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca0e0d3565cb7e010972999760be078d2a46963a0a4cf2d1e025eaa958ba0397\"" Apr 23 23:15:14.633648 containerd[1534]: time="2026-04-23T23:15:14.633319243Z" level=info msg="StartContainer for \"ca0e0d3565cb7e010972999760be078d2a46963a0a4cf2d1e025eaa958ba0397\"" Apr 23 23:15:14.636095 containerd[1534]: time="2026-04-23T23:15:14.635728020Z" level=info msg="connecting to shim ca0e0d3565cb7e010972999760be078d2a46963a0a4cf2d1e025eaa958ba0397" address="unix:///run/containerd/s/594fd859f74776a707fb30e3af21dd60e8bed6cbbdce604f88c95ac75e1e0556" protocol=ttrpc version=3 Apr 23 23:15:14.663245 systemd[1]: Started cri-containerd-ca0e0d3565cb7e010972999760be078d2a46963a0a4cf2d1e025eaa958ba0397.scope - libcontainer container ca0e0d3565cb7e010972999760be078d2a46963a0a4cf2d1e025eaa958ba0397. Apr 23 23:15:14.665430 systemd[1]: Started cri-containerd-ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a.scope - libcontainer container ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a. 
Apr 23 23:15:14.718395 containerd[1534]: time="2026-04-23T23:15:14.718344327Z" level=info msg="StartContainer for \"ca0e0d3565cb7e010972999760be078d2a46963a0a4cf2d1e025eaa958ba0397\" returns successfully" Apr 23 23:15:14.740291 containerd[1534]: time="2026-04-23T23:15:14.740228962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8dwsq,Uid:cdb3e0df-08ea-4ff4-92fd-2432c4aa57f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a\"" Apr 23 23:15:14.754035 containerd[1534]: time="2026-04-23T23:15:14.752671730Z" level=info msg="CreateContainer within sandbox \"ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 23 23:15:14.770216 systemd-networkd[1425]: calic577ee08614: Gained IPv6LL Apr 23 23:15:14.773799 containerd[1534]: time="2026-04-23T23:15:14.773752360Z" level=info msg="Container 7ef1310b8115248d0d0402519aa5fed5165d247af20e454e6d42c87549b79d55: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:14.784304 containerd[1534]: time="2026-04-23T23:15:14.783883712Z" level=info msg="CreateContainer within sandbox \"ca71366f224fec886a541da9e11b3ac72a5e1ca06b96eb97ac3e5fe9ced8c05a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ef1310b8115248d0d0402519aa5fed5165d247af20e454e6d42c87549b79d55\"" Apr 23 23:15:14.786427 containerd[1534]: time="2026-04-23T23:15:14.786380090Z" level=info msg="StartContainer for \"7ef1310b8115248d0d0402519aa5fed5165d247af20e454e6d42c87549b79d55\"" Apr 23 23:15:14.788280 containerd[1534]: time="2026-04-23T23:15:14.788239583Z" level=info msg="connecting to shim 7ef1310b8115248d0d0402519aa5fed5165d247af20e454e6d42c87549b79d55" address="unix:///run/containerd/s/b02123387f29daf61c38d2ef45068dcd1eeda9d9f3fdc1dce5bd420dddc204de" protocol=ttrpc version=3 Apr 23 23:15:14.866766 systemd[1]: Started 
cri-containerd-7ef1310b8115248d0d0402519aa5fed5165d247af20e454e6d42c87549b79d55.scope - libcontainer container 7ef1310b8115248d0d0402519aa5fed5165d247af20e454e6d42c87549b79d55. Apr 23 23:15:14.952836 containerd[1534]: time="2026-04-23T23:15:14.952707991Z" level=info msg="StartContainer for \"7ef1310b8115248d0d0402519aa5fed5165d247af20e454e6d42c87549b79d55\" returns successfully" Apr 23 23:15:15.026272 systemd-networkd[1425]: cali769d850eeb1: Gained IPv6LL Apr 23 23:15:15.218192 systemd-networkd[1425]: cali76392e53bb3: Gained IPv6LL Apr 23 23:15:15.255779 kubelet[2777]: I0423 23:15:15.255567 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zlk5v" podStartSLOduration=35.255548373 podStartE2EDuration="35.255548373s" podCreationTimestamp="2026-04-23 23:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:15:15.255311371 +0000 UTC m=+39.464539393" watchObservedRunningTime="2026-04-23 23:15:15.255548373 +0000 UTC m=+39.464776395" Apr 23 23:15:15.258312 kubelet[2777]: I0423 23:15:15.258258 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8dwsq" podStartSLOduration=35.258238712 podStartE2EDuration="35.258238712s" podCreationTimestamp="2026-04-23 23:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:15:15.225923923 +0000 UTC m=+39.435151945" watchObservedRunningTime="2026-04-23 23:15:15.258238712 +0000 UTC m=+39.467466734" Apr 23 23:15:15.263884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1978933948.mount: Deactivated successfully. 
Apr 23 23:15:15.730279 systemd-networkd[1425]: calia4aef94e781: Gained IPv6LL Apr 23 23:15:15.903144 systemd-networkd[1425]: vxlan.calico: Link UP Apr 23 23:15:15.903152 systemd-networkd[1425]: vxlan.calico: Gained carrier Apr 23 23:15:16.051206 systemd-networkd[1425]: cali71d1bfcb67f: Gained IPv6LL Apr 23 23:15:16.306908 systemd-networkd[1425]: cali107193410fb: Gained IPv6LL Apr 23 23:15:16.900798 containerd[1534]: time="2026-04-23T23:15:16.900690812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:16.902413 containerd[1534]: time="2026-04-23T23:15:16.902350904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=5896864" Apr 23 23:15:16.903617 containerd[1534]: time="2026-04-23T23:15:16.903560272Z" level=info msg="ImageCreate event name:\"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:16.906496 containerd[1534]: time="2026-04-23T23:15:16.906439412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:16.907119 containerd[1534]: time="2026-04-23T23:15:16.907076417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"8472495\" in 2.878922231s" Apr 23 23:15:16.907119 containerd[1534]: time="2026-04-23T23:15:16.907113737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference 
\"sha256:a47d4844a7d3a4350ed0ac1bc7a5e68be5c0d8a9b81906debd805ec9c4deec82\"" Apr 23 23:15:16.909294 containerd[1534]: time="2026-04-23T23:15:16.909116351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 23 23:15:16.915211 containerd[1534]: time="2026-04-23T23:15:16.913668063Z" level=info msg="CreateContainer within sandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 23 23:15:16.925271 containerd[1534]: time="2026-04-23T23:15:16.925206024Z" level=info msg="Container 313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:16.936996 containerd[1534]: time="2026-04-23T23:15:16.936918907Z" level=info msg="CreateContainer within sandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\"" Apr 23 23:15:16.939423 containerd[1534]: time="2026-04-23T23:15:16.939383124Z" level=info msg="StartContainer for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\"" Apr 23 23:15:16.940952 containerd[1534]: time="2026-04-23T23:15:16.940915655Z" level=info msg="connecting to shim 313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335" address="unix:///run/containerd/s/17da84dad486ab313f628ef66b6bd3c6ee88ce78d36ae0df818e28dec212fd2e" protocol=ttrpc version=3 Apr 23 23:15:16.968594 systemd[1]: Started cri-containerd-313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335.scope - libcontainer container 313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335. 
Apr 23 23:15:17.035469 containerd[1534]: time="2026-04-23T23:15:17.035402559Z" level=info msg="StartContainer for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" returns successfully" Apr 23 23:15:17.459010 systemd-networkd[1425]: vxlan.calico: Gained IPv6LL Apr 23 23:15:19.707072 containerd[1534]: time="2026-04-23T23:15:19.706268041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:19.707615 containerd[1534]: time="2026-04-23T23:15:19.707580250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=42617669" Apr 23 23:15:19.709351 containerd[1534]: time="2026-04-23T23:15:19.709314142Z" level=info msg="ImageCreate event name:\"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:19.714920 containerd[1534]: time="2026-04-23T23:15:19.714858580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:19.716640 containerd[1534]: time="2026-04-23T23:15:19.716586632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 2.807429361s" Apr 23 23:15:19.716838 containerd[1534]: time="2026-04-23T23:15:19.716815514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 23 23:15:19.720360 containerd[1534]: 
time="2026-04-23T23:15:19.720326738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 23 23:15:19.722698 containerd[1534]: time="2026-04-23T23:15:19.722661074Z" level=info msg="CreateContainer within sandbox \"1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 23 23:15:19.735779 containerd[1534]: time="2026-04-23T23:15:19.735723085Z" level=info msg="Container 11e200b97f46ed2fda161af65fe2977893d2a6062c8ec540dea39f7dc33f443f: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:19.753331 containerd[1534]: time="2026-04-23T23:15:19.753282927Z" level=info msg="CreateContainer within sandbox \"1043931e9c3036d0677e0c2a83ed00b53f700e29f7b45ea9b4a3955d6679f677\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"11e200b97f46ed2fda161af65fe2977893d2a6062c8ec540dea39f7dc33f443f\"" Apr 23 23:15:19.755102 containerd[1534]: time="2026-04-23T23:15:19.754830138Z" level=info msg="StartContainer for \"11e200b97f46ed2fda161af65fe2977893d2a6062c8ec540dea39f7dc33f443f\"" Apr 23 23:15:19.757274 containerd[1534]: time="2026-04-23T23:15:19.756797632Z" level=info msg="connecting to shim 11e200b97f46ed2fda161af65fe2977893d2a6062c8ec540dea39f7dc33f443f" address="unix:///run/containerd/s/e683712e95184a12f62de2340b0eddacdf748c0cc6a8a1ed79c86512ff5034ef" protocol=ttrpc version=3 Apr 23 23:15:19.791389 systemd[1]: Started cri-containerd-11e200b97f46ed2fda161af65fe2977893d2a6062c8ec540dea39f7dc33f443f.scope - libcontainer container 11e200b97f46ed2fda161af65fe2977893d2a6062c8ec540dea39f7dc33f443f. 
Apr 23 23:15:19.845275 containerd[1534]: time="2026-04-23T23:15:19.845228566Z" level=info msg="StartContainer for \"11e200b97f46ed2fda161af65fe2977893d2a6062c8ec540dea39f7dc33f443f\" returns successfully" Apr 23 23:15:21.232330 kubelet[2777]: I0423 23:15:21.232064 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:15:21.311062 containerd[1534]: time="2026-04-23T23:15:21.310718424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:21.312473 containerd[1534]: time="2026-04-23T23:15:21.311937152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=7895994" Apr 23 23:15:21.313883 containerd[1534]: time="2026-04-23T23:15:21.313839445Z" level=info msg="ImageCreate event name:\"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:21.316316 containerd[1534]: time="2026-04-23T23:15:21.316266262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:21.318014 containerd[1534]: time="2026-04-23T23:15:21.317568711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"10471633\" in 1.597046251s" Apr 23 23:15:21.318307 containerd[1534]: time="2026-04-23T23:15:21.318195475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:c84299759d8605dff0cc2ebb16a8c098e7266501883bb302cd068ecf668128a6\"" Apr 23 23:15:21.320500 
containerd[1534]: time="2026-04-23T23:15:21.320232729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 23 23:15:21.325312 containerd[1534]: time="2026-04-23T23:15:21.324816641Z" level=info msg="CreateContainer within sandbox \"4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 23 23:15:21.342439 containerd[1534]: time="2026-04-23T23:15:21.342373402Z" level=info msg="Container 50c6a532893ab32e2239997d49a9e06e239c01a0244975bb92fb16a6f180f065: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:21.347844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938901016.mount: Deactivated successfully. Apr 23 23:15:21.357273 containerd[1534]: time="2026-04-23T23:15:21.357219584Z" level=info msg="CreateContainer within sandbox \"4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"50c6a532893ab32e2239997d49a9e06e239c01a0244975bb92fb16a6f180f065\"" Apr 23 23:15:21.359601 containerd[1534]: time="2026-04-23T23:15:21.358319672Z" level=info msg="StartContainer for \"50c6a532893ab32e2239997d49a9e06e239c01a0244975bb92fb16a6f180f065\"" Apr 23 23:15:21.361116 containerd[1534]: time="2026-04-23T23:15:21.361080491Z" level=info msg="connecting to shim 50c6a532893ab32e2239997d49a9e06e239c01a0244975bb92fb16a6f180f065" address="unix:///run/containerd/s/217c8a1946844a8903a8c1933333210f115393ca4877ba876e0660d41b33f1e6" protocol=ttrpc version=3 Apr 23 23:15:21.391242 systemd[1]: Started cri-containerd-50c6a532893ab32e2239997d49a9e06e239c01a0244975bb92fb16a6f180f065.scope - libcontainer container 50c6a532893ab32e2239997d49a9e06e239c01a0244975bb92fb16a6f180f065. 
Apr 23 23:15:21.470971 containerd[1534]: time="2026-04-23T23:15:21.470907768Z" level=info msg="StartContainer for \"50c6a532893ab32e2239997d49a9e06e239c01a0244975bb92fb16a6f180f065\" returns successfully" Apr 23 23:15:21.701516 containerd[1534]: time="2026-04-23T23:15:21.701407397Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:21.703047 containerd[1534]: time="2026-04-23T23:15:21.702346684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 23 23:15:21.705266 containerd[1534]: time="2026-04-23T23:15:21.705139863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"45193324\" in 384.864133ms" Apr 23 23:15:21.705266 containerd[1534]: time="2026-04-23T23:15:21.705180703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3c1e1bbd22dcb1019213c98ef14b99d423455fa7cf8c3a9791619bc5605ccefd\"" Apr 23 23:15:21.707764 containerd[1534]: time="2026-04-23T23:15:21.707074556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 23 23:15:21.713053 containerd[1534]: time="2026-04-23T23:15:21.712548074Z" level=info msg="CreateContainer within sandbox \"dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 23 23:15:21.735273 containerd[1534]: time="2026-04-23T23:15:21.735223550Z" level=info msg="Container 3a8c9b2f2ea94f9515d4077c2aa82441ecd78b065f08fdf029d183daf1079f82: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:21.755795 containerd[1534]: 
time="2026-04-23T23:15:21.755654011Z" level=info msg="CreateContainer within sandbox \"dfb3692dfaf6cb861ec9e810253cd63c12b92686d78a3854024148b815ad7203\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3a8c9b2f2ea94f9515d4077c2aa82441ecd78b065f08fdf029d183daf1079f82\"" Apr 23 23:15:21.757087 containerd[1534]: time="2026-04-23T23:15:21.756909260Z" level=info msg="StartContainer for \"3a8c9b2f2ea94f9515d4077c2aa82441ecd78b065f08fdf029d183daf1079f82\"" Apr 23 23:15:21.758745 containerd[1534]: time="2026-04-23T23:15:21.758701112Z" level=info msg="connecting to shim 3a8c9b2f2ea94f9515d4077c2aa82441ecd78b065f08fdf029d183daf1079f82" address="unix:///run/containerd/s/d90916cef179158929da978577369661822c715472635b739a35fcdc9b79cd83" protocol=ttrpc version=3 Apr 23 23:15:21.779227 systemd[1]: Started cri-containerd-3a8c9b2f2ea94f9515d4077c2aa82441ecd78b065f08fdf029d183daf1079f82.scope - libcontainer container 3a8c9b2f2ea94f9515d4077c2aa82441ecd78b065f08fdf029d183daf1079f82. 
Apr 23 23:15:21.827290 containerd[1534]: time="2026-04-23T23:15:21.827254345Z" level=info msg="StartContainer for \"3a8c9b2f2ea94f9515d4077c2aa82441ecd78b065f08fdf029d183daf1079f82\" returns successfully" Apr 23 23:15:22.263058 kubelet[2777]: I0423 23:15:22.262260 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-578f5c6d65-vnzn4" podStartSLOduration=23.609416945 podStartE2EDuration="29.25837507s" podCreationTimestamp="2026-04-23 23:14:53 +0000 UTC" firstStartedPulling="2026-04-23 23:15:14.069766802 +0000 UTC m=+38.278994824" lastFinishedPulling="2026-04-23 23:15:19.718724927 +0000 UTC m=+43.927952949" observedRunningTime="2026-04-23 23:15:20.245841983 +0000 UTC m=+44.455070005" watchObservedRunningTime="2026-04-23 23:15:22.25837507 +0000 UTC m=+46.467603132" Apr 23 23:15:22.264310 kubelet[2777]: I0423 23:15:22.264127 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-578f5c6d65-g6dpq" podStartSLOduration=21.691359892 podStartE2EDuration="29.264067709s" podCreationTimestamp="2026-04-23 23:14:53 +0000 UTC" firstStartedPulling="2026-04-23 23:15:14.133396093 +0000 UTC m=+38.342624075" lastFinishedPulling="2026-04-23 23:15:21.70610387 +0000 UTC m=+45.915331892" observedRunningTime="2026-04-23 23:15:22.257494224 +0000 UTC m=+46.466722246" watchObservedRunningTime="2026-04-23 23:15:22.264067709 +0000 UTC m=+46.473295731" Apr 23 23:15:23.244926 kubelet[2777]: I0423 23:15:23.244518 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:15:24.874163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295158042.mount: Deactivated successfully. 
Apr 23 23:15:25.207154 containerd[1534]: time="2026-04-23T23:15:25.206992911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:25.208525 containerd[1534]: time="2026-04-23T23:15:25.208355920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=48513326" Apr 23 23:15:25.209489 containerd[1534]: time="2026-04-23T23:15:25.209440207Z" level=info msg="ImageCreate event name:\"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:25.213047 containerd[1534]: time="2026-04-23T23:15:25.212981871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:25.214462 containerd[1534]: time="2026-04-23T23:15:25.214324480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"48513172\" in 3.507211844s" Apr 23 23:15:25.214462 containerd[1534]: time="2026-04-23T23:15:25.214359481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:f556d75d96fa1483cf593e71a7d71a551e78433f43c12badd65e95187cd0fced\"" Apr 23 23:15:25.216371 containerd[1534]: time="2026-04-23T23:15:25.215658529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 23 23:15:25.219645 containerd[1534]: time="2026-04-23T23:15:25.219596796Z" level=info msg="CreateContainer within sandbox 
\"77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 23 23:15:25.233327 containerd[1534]: time="2026-04-23T23:15:25.233272289Z" level=info msg="Container 95a8bf6a99247262593275ead4a27e627d9471ab649078f917e525e865f36c66: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:25.238251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667804290.mount: Deactivated successfully. Apr 23 23:15:25.246945 containerd[1534]: time="2026-04-23T23:15:25.246801701Z" level=info msg="CreateContainer within sandbox \"77bb2a8add5498f0b05b645b13d1b476c9ee73cfc45d6e7d023ed16e2bb787db\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"95a8bf6a99247262593275ead4a27e627d9471ab649078f917e525e865f36c66\"" Apr 23 23:15:25.247941 containerd[1534]: time="2026-04-23T23:15:25.247877148Z" level=info msg="StartContainer for \"95a8bf6a99247262593275ead4a27e627d9471ab649078f917e525e865f36c66\"" Apr 23 23:15:25.251741 containerd[1534]: time="2026-04-23T23:15:25.251436093Z" level=info msg="connecting to shim 95a8bf6a99247262593275ead4a27e627d9471ab649078f917e525e865f36c66" address="unix:///run/containerd/s/d2cd8f41cbed58abfe1ae787057de764695d6fa4be439ad69b944b602aaa3d06" protocol=ttrpc version=3 Apr 23 23:15:25.284426 systemd[1]: Started cri-containerd-95a8bf6a99247262593275ead4a27e627d9471ab649078f917e525e865f36c66.scope - libcontainer container 95a8bf6a99247262593275ead4a27e627d9471ab649078f917e525e865f36c66. 
Apr 23 23:15:25.333093 containerd[1534]: time="2026-04-23T23:15:25.333052247Z" level=info msg="StartContainer for \"95a8bf6a99247262593275ead4a27e627d9471ab649078f917e525e865f36c66\" returns successfully" Apr 23 23:15:26.296115 kubelet[2777]: I0423 23:15:26.295348 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-57885fdd4c-hmcnz" podStartSLOduration=22.285547618 podStartE2EDuration="33.295324939s" podCreationTimestamp="2026-04-23 23:14:53 +0000 UTC" firstStartedPulling="2026-04-23 23:15:14.205394045 +0000 UTC m=+38.414622027" lastFinishedPulling="2026-04-23 23:15:25.215171326 +0000 UTC m=+49.424399348" observedRunningTime="2026-04-23 23:15:26.293503166 +0000 UTC m=+50.502731228" watchObservedRunningTime="2026-04-23 23:15:26.295324939 +0000 UTC m=+50.504553001" Apr 23 23:15:26.932879 containerd[1534]: time="2026-04-23T23:15:26.932828936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5666744c8-g96t2,Uid:6d44be6e-c374-47b1-8ba7-226aa56ef63a,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:26.961349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886251800.mount: Deactivated successfully. 
Apr 23 23:15:26.990146 containerd[1534]: time="2026-04-23T23:15:26.989847722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:26.991511 containerd[1534]: time="2026-04-23T23:15:26.991442933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=15624823" Apr 23 23:15:26.993546 containerd[1534]: time="2026-04-23T23:15:26.993492507Z" level=info msg="ImageCreate event name:\"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:26.997039 containerd[1534]: time="2026-04-23T23:15:26.996417686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:26.999838 containerd[1534]: time="2026-04-23T23:15:26.999671668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"15624653\" in 1.782329127s" Apr 23 23:15:26.999838 containerd[1534]: time="2026-04-23T23:15:26.999722389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:b6ad9a1ad05ff3a8548f5adf860703add7bc41ef2f24f47e461f1914f73f7c8f\"" Apr 23 23:15:27.005597 containerd[1534]: time="2026-04-23T23:15:27.005560308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 23 23:15:27.011840 containerd[1534]: time="2026-04-23T23:15:27.011445228Z" level=info msg="CreateContainer within sandbox 
\"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 23 23:15:27.027127 containerd[1534]: time="2026-04-23T23:15:27.027064813Z" level=info msg="Container 4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:27.050792 containerd[1534]: time="2026-04-23T23:15:27.049905968Z" level=info msg="CreateContainer within sandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\"" Apr 23 23:15:27.054008 containerd[1534]: time="2026-04-23T23:15:27.053808034Z" level=info msg="StartContainer for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\"" Apr 23 23:15:27.057337 containerd[1534]: time="2026-04-23T23:15:27.055422285Z" level=info msg="connecting to shim 4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0" address="unix:///run/containerd/s/17da84dad486ab313f628ef66b6bd3c6ee88ce78d36ae0df818e28dec212fd2e" protocol=ttrpc version=3 Apr 23 23:15:27.098330 systemd[1]: Started cri-containerd-4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0.scope - libcontainer container 4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0. 
Apr 23 23:15:27.170383 systemd-networkd[1425]: cali3e69d703668: Link UP Apr 23 23:15:27.171472 systemd-networkd[1425]: cali3e69d703668: Gained carrier Apr 23 23:15:27.177736 containerd[1534]: time="2026-04-23T23:15:27.177531909Z" level=info msg="StartContainer for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" returns successfully" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.020 [INFO][4896] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0 calico-kube-controllers-5666744c8- calico-system 6d44be6e-c374-47b1-8ba7-226aa56ef63a 834 0 2026-04-23 23:14:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5666744c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b calico-kube-controllers-5666744c8-g96t2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3e69d703668 [] [] }} ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.020 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.080 [INFO][4912] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" HandleID="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.099 [INFO][4912] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" HandleID="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002efeb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"calico-kube-controllers-5666744c8-g96t2", "timestamp":"2026-04-23 23:15:27.080512694 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000188dc0)} Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.100 [INFO][4912] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.100 [INFO][4912] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.100 [INFO][4912] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.104 [INFO][4912] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.112 [INFO][4912] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.121 [INFO][4912] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.126 [INFO][4912] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.130 [INFO][4912] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.132 [INFO][4912] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.135 [INFO][4912] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.141 [INFO][4912] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.157 [INFO][4912] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.114.8/26] block=192.168.114.0/26 handle="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.157 [INFO][4912] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.8/26] handle="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:27.201280 containerd[1534]: 2026-04-23 23:15:27.157 [INFO][4912] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:27.205299 containerd[1534]: 2026-04-23 23:15:27.157 [INFO][4912] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.8/26] IPv6=[] ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" HandleID="k8s-pod-network.1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Workload="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" Apr 23 23:15:27.205299 containerd[1534]: 2026-04-23 23:15:27.163 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0", GenerateName:"calico-kube-controllers-5666744c8-", Namespace:"calico-system", SelfLink:"", UID:"6d44be6e-c374-47b1-8ba7-226aa56ef63a", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5666744c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"calico-kube-controllers-5666744c8-g96t2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e69d703668", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:27.205299 containerd[1534]: 2026-04-23 23:15:27.163 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.8/32] ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" Apr 23 23:15:27.205299 containerd[1534]: 2026-04-23 23:15:27.164 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e69d703668 ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" Apr 23 23:15:27.205299 containerd[1534]: 2026-04-23 23:15:27.173 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" Apr 23 23:15:27.205483 containerd[1534]: 2026-04-23 23:15:27.174 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0", GenerateName:"calico-kube-controllers-5666744c8-", Namespace:"calico-system", SelfLink:"", UID:"6d44be6e-c374-47b1-8ba7-226aa56ef63a", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 14, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5666744c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b", Pod:"calico-kube-controllers-5666744c8-g96t2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.8/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3e69d703668", MAC:"6a:b4:80:b2:58:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:27.205483 containerd[1534]: 2026-04-23 23:15:27.197 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" Namespace="calico-system" Pod="calico-kube-controllers-5666744c8-g96t2" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-calico--kube--controllers--5666744c8--g96t2-eth0" Apr 23 23:15:27.238469 containerd[1534]: time="2026-04-23T23:15:27.238391800Z" level=info msg="connecting to shim 1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b" address="unix:///run/containerd/s/dc915fad1941bc6e9afa995b6153ed27cdd641f04b89fcce3177710c8f1ed03a" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:27.268274 systemd[1]: Started cri-containerd-1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b.scope - libcontainer container 1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b. 
Apr 23 23:15:27.290671 containerd[1534]: time="2026-04-23T23:15:27.290598832Z" level=info msg="StopContainer for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" with timeout 30 (s)" Apr 23 23:15:27.291255 containerd[1534]: time="2026-04-23T23:15:27.291198756Z" level=info msg="StopContainer for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" with timeout 30 (s)" Apr 23 23:15:27.295122 containerd[1534]: time="2026-04-23T23:15:27.295018462Z" level=info msg="Stop container \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" with signal terminated" Apr 23 23:15:27.295571 containerd[1534]: time="2026-04-23T23:15:27.295333064Z" level=info msg="Stop container \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" with signal terminated" Apr 23 23:15:27.330093 systemd[1]: cri-containerd-4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0.scope: Deactivated successfully. Apr 23 23:15:27.343233 containerd[1534]: time="2026-04-23T23:15:27.343178267Z" level=info msg="received container exit event container_id:\"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" id:\"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" pid:4932 exit_status:2 exited_at:{seconds:1776986127 nanos:342185020}" Apr 23 23:15:27.343961 systemd[1]: cri-containerd-313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335.scope: Deactivated successfully. 
Apr 23 23:15:27.346816 containerd[1534]: time="2026-04-23T23:15:27.346746651Z" level=info msg="received container exit event container_id:\"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" id:\"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" pid:4665 exited_at:{seconds:1776986127 nanos:345405282}" Apr 23 23:15:27.423554 containerd[1534]: time="2026-04-23T23:15:27.423500929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5666744c8-g96t2,Uid:6d44be6e-c374-47b1-8ba7-226aa56ef63a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b\"" Apr 23 23:15:27.575633 containerd[1534]: time="2026-04-23T23:15:27.575585076Z" level=info msg="StopContainer for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" returns successfully" Apr 23 23:15:27.581001 containerd[1534]: time="2026-04-23T23:15:27.580948832Z" level=info msg="StopContainer for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" returns successfully" Apr 23 23:15:27.583643 containerd[1534]: time="2026-04-23T23:15:27.583585890Z" level=info msg="StopPodSandbox for \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\"" Apr 23 23:15:27.584343 containerd[1534]: time="2026-04-23T23:15:27.584280414Z" level=info msg="Container to stop \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 23 23:15:27.584343 containerd[1534]: time="2026-04-23T23:15:27.584309655Z" level=info msg="Container to stop \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 23 23:15:27.594013 systemd[1]: cri-containerd-7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820.scope: Deactivated successfully. 
Apr 23 23:15:27.597318 containerd[1534]: time="2026-04-23T23:15:27.597189142Z" level=info msg="received sandbox exit event container_id:\"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" id:\"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" exit_status:137 exited_at:{seconds:1776986127 nanos:596766419}" monitor_name=podsandbox Apr 23 23:15:27.630797 containerd[1534]: time="2026-04-23T23:15:27.630757408Z" level=info msg="shim disconnected" id=7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820 namespace=k8s.io Apr 23 23:15:27.636806 containerd[1534]: time="2026-04-23T23:15:27.630920849Z" level=warning msg="cleaning up after shim disconnected" id=7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820 namespace=k8s.io Apr 23 23:15:27.637112 containerd[1534]: time="2026-04-23T23:15:27.636788089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 23 23:15:27.685634 containerd[1534]: time="2026-04-23T23:15:27.684476531Z" level=info msg="received sandbox container exit event sandbox_id:\"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" exit_status:137 exited_at:{seconds:1776986127 nanos:596766419}" monitor_name=criService Apr 23 23:15:27.743878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0-rootfs.mount: Deactivated successfully. Apr 23 23:15:27.744143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335-rootfs.mount: Deactivated successfully. Apr 23 23:15:27.744243 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820-rootfs.mount: Deactivated successfully. Apr 23 23:15:27.744320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820-shm.mount: Deactivated successfully. 
Apr 23 23:15:27.766146 kubelet[2777]: I0423 23:15:27.765198 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-554fbbd85-5jspr" podStartSLOduration=17.790130489 podStartE2EDuration="30.765176115s" podCreationTimestamp="2026-04-23 23:14:57 +0000 UTC" firstStartedPulling="2026-04-23 23:15:14.02728678 +0000 UTC m=+38.236514762" lastFinishedPulling="2026-04-23 23:15:27.002332406 +0000 UTC m=+51.211560388" observedRunningTime="2026-04-23 23:15:27.319294346 +0000 UTC m=+51.528522368" watchObservedRunningTime="2026-04-23 23:15:27.765176115 +0000 UTC m=+51.974404137" Apr 23 23:15:27.769240 systemd-networkd[1425]: calic577ee08614: Link DOWN Apr 23 23:15:27.769254 systemd-networkd[1425]: calic577ee08614: Lost carrier Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.763 [INFO][5115] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.764 [INFO][5115] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" iface="eth0" netns="/var/run/netns/cni-0547c0c6-dfbe-00c0-0921-5302e8c25664" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.767 [INFO][5115] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" iface="eth0" netns="/var/run/netns/cni-0547c0c6-dfbe-00c0-0921-5302e8c25664" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.780 [INFO][5115] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" after=14.934581ms iface="eth0" netns="/var/run/netns/cni-0547c0c6-dfbe-00c0-0921-5302e8c25664" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.782 [INFO][5115] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.782 [INFO][5115] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.833 [INFO][5125] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.833 [INFO][5125] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.833 [INFO][5125] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.896 [INFO][5125] ipam/ipam_plugin.go 517: Released address using handleID ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.896 [INFO][5125] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.898 [INFO][5125] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:27.903946 containerd[1534]: 2026-04-23 23:15:27.901 [INFO][5115] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:27.907612 systemd[1]: run-netns-cni\x2d0547c0c6\x2ddfbe\x2d00c0\x2d0921\x2d5302e8c25664.mount: Deactivated successfully. 
Apr 23 23:15:27.913524 containerd[1534]: time="2026-04-23T23:15:27.909276968Z" level=info msg="TearDown network for sandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" successfully" Apr 23 23:15:27.913524 containerd[1534]: time="2026-04-23T23:15:27.909321848Z" level=info msg="StopPodSandbox for \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" returns successfully" Apr 23 23:15:28.055143 kubelet[2777]: I0423 23:15:28.055064 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-ca-bundle\") pod \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " Apr 23 23:15:28.055143 kubelet[2777]: I0423 23:15:28.055130 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-nginx-config\") pod \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " Apr 23 23:15:28.055440 kubelet[2777]: I0423 23:15:28.055186 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58szj\" (UniqueName: \"kubernetes.io/projected/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-kube-api-access-58szj\") pod \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " Apr 23 23:15:28.055440 kubelet[2777]: I0423 23:15:28.055227 2777 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-backend-key-pair\") pod \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\" (UID: \"dd1b9eb6-a9bc-42a8-8140-b36f5efd125a\") " Apr 23 23:15:28.068693 kubelet[2777]: I0423 23:15:28.068622 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a" (UID: "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:15:28.072130 kubelet[2777]: I0423 23:15:28.072075 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a" (UID: "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 23 23:15:28.072257 kubelet[2777]: I0423 23:15:28.072226 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a" (UID: "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 23 23:15:28.073631 systemd[1]: var-lib-kubelet-pods-dd1b9eb6\x2da9bc\x2d42a8\x2d8140\x2db36f5efd125a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 23 23:15:28.075311 kubelet[2777]: I0423 23:15:28.074165 2777 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-kube-api-access-58szj" (OuterVolumeSpecName: "kube-api-access-58szj") pod "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a" (UID: "dd1b9eb6-a9bc-42a8-8140-b36f5efd125a"). InnerVolumeSpecName "kube-api-access-58szj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 23 23:15:28.077535 systemd[1]: var-lib-kubelet-pods-dd1b9eb6\x2da9bc\x2d42a8\x2d8140\x2db36f5efd125a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d58szj.mount: Deactivated successfully. Apr 23 23:15:28.156613 kubelet[2777]: I0423 23:15:28.156088 2777 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-58szj\" (UniqueName: \"kubernetes.io/projected/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-kube-api-access-58szj\") on node \"ci-4459-2-4-n-a35467bd0b\" DevicePath \"\"" Apr 23 23:15:28.156613 kubelet[2777]: I0423 23:15:28.156138 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-backend-key-pair\") on node \"ci-4459-2-4-n-a35467bd0b\" DevicePath \"\"" Apr 23 23:15:28.156613 kubelet[2777]: I0423 23:15:28.156153 2777 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-whisker-ca-bundle\") on node \"ci-4459-2-4-n-a35467bd0b\" DevicePath \"\"" Apr 23 23:15:28.156613 kubelet[2777]: I0423 23:15:28.156164 2777 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a-nginx-config\") on node \"ci-4459-2-4-n-a35467bd0b\" DevicePath \"\"" Apr 23 23:15:28.292910 kubelet[2777]: I0423 23:15:28.292869 2777 scope.go:117] "RemoveContainer" containerID="4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0" Apr 23 23:15:28.300375 containerd[1534]: time="2026-04-23T23:15:28.300295361Z" level=info msg="RemoveContainer for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\"" Apr 23 23:15:28.305063 systemd[1]: Removed slice kubepods-besteffort-poddd1b9eb6_a9bc_42a8_8140_b36f5efd125a.slice - libcontainer container 
kubepods-besteffort-poddd1b9eb6_a9bc_42a8_8140_b36f5efd125a.slice. Apr 23 23:15:28.317665 containerd[1534]: time="2026-04-23T23:15:28.317619477Z" level=info msg="RemoveContainer for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" returns successfully" Apr 23 23:15:28.318349 kubelet[2777]: I0423 23:15:28.318309 2777 scope.go:117] "RemoveContainer" containerID="313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335" Apr 23 23:15:28.322209 containerd[1534]: time="2026-04-23T23:15:28.322148308Z" level=info msg="RemoveContainer for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\"" Apr 23 23:15:28.339182 systemd-networkd[1425]: cali3e69d703668: Gained IPv6LL Apr 23 23:15:28.341516 containerd[1534]: time="2026-04-23T23:15:28.341014875Z" level=info msg="RemoveContainer for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" returns successfully" Apr 23 23:15:28.346964 kubelet[2777]: I0423 23:15:28.346646 2777 scope.go:117] "RemoveContainer" containerID="4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0" Apr 23 23:15:28.349453 containerd[1534]: time="2026-04-23T23:15:28.348914768Z" level=error msg="ContainerStatus for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\": not found" Apr 23 23:15:28.356905 kubelet[2777]: E0423 23:15:28.356556 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\": not found" containerID="4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0" Apr 23 23:15:28.356905 kubelet[2777]: I0423 23:15:28.356654 2777 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0"} err="failed to get container status \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\": not found" Apr 23 23:15:28.356905 kubelet[2777]: I0423 23:15:28.356704 2777 scope.go:117] "RemoveContainer" containerID="313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335" Apr 23 23:15:28.358199 containerd[1534]: time="2026-04-23T23:15:28.357988949Z" level=error msg="ContainerStatus for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\": not found" Apr 23 23:15:28.361469 kubelet[2777]: E0423 23:15:28.360418 2777 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\": not found" containerID="313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335" Apr 23 23:15:28.361469 kubelet[2777]: I0423 23:15:28.361269 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335"} err="failed to get container status \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\": rpc error: code = NotFound desc = an error occurred when try to find container \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\": not found" Apr 23 23:15:28.361469 kubelet[2777]: I0423 23:15:28.361305 2777 scope.go:117] "RemoveContainer" containerID="4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0" Apr 23 23:15:28.363220 containerd[1534]: 
time="2026-04-23T23:15:28.362692461Z" level=error msg="ContainerStatus for \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\": not found" Apr 23 23:15:28.363796 kubelet[2777]: I0423 23:15:28.363678 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0"} err="failed to get container status \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a686e6e07271e85537a18cd23ceef5d873639eb8ffc914c2a34b8c533f136a0\": not found" Apr 23 23:15:28.363796 kubelet[2777]: I0423 23:15:28.363794 2777 scope.go:117] "RemoveContainer" containerID="313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335" Apr 23 23:15:28.364705 containerd[1534]: time="2026-04-23T23:15:28.364647714Z" level=error msg="ContainerStatus for \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\": not found" Apr 23 23:15:28.364980 kubelet[2777]: I0423 23:15:28.364838 2777 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335"} err="failed to get container status \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\": rpc error: code = NotFound desc = an error occurred when try to find container \"313a0715a5c5b32960a53219a2104d449823a189431316bdaef0f3e0527be335\": not found" Apr 23 23:15:28.445842 systemd[1]: Created slice kubepods-besteffort-podb52b24c1_b346_4df6_bb7c_94587faa87e8.slice - libcontainer container 
kubepods-besteffort-podb52b24c1_b346_4df6_bb7c_94587faa87e8.slice. Apr 23 23:15:28.559751 kubelet[2777]: I0423 23:15:28.559699 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwb2\" (UniqueName: \"kubernetes.io/projected/b52b24c1-b346-4df6-bb7c-94587faa87e8-kube-api-access-hgwb2\") pod \"whisker-746c567d77-h2ffx\" (UID: \"b52b24c1-b346-4df6-bb7c-94587faa87e8\") " pod="calico-system/whisker-746c567d77-h2ffx" Apr 23 23:15:28.559751 kubelet[2777]: I0423 23:15:28.559759 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b52b24c1-b346-4df6-bb7c-94587faa87e8-whisker-backend-key-pair\") pod \"whisker-746c567d77-h2ffx\" (UID: \"b52b24c1-b346-4df6-bb7c-94587faa87e8\") " pod="calico-system/whisker-746c567d77-h2ffx" Apr 23 23:15:28.559923 kubelet[2777]: I0423 23:15:28.559786 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/b52b24c1-b346-4df6-bb7c-94587faa87e8-nginx-config\") pod \"whisker-746c567d77-h2ffx\" (UID: \"b52b24c1-b346-4df6-bb7c-94587faa87e8\") " pod="calico-system/whisker-746c567d77-h2ffx" Apr 23 23:15:28.559923 kubelet[2777]: I0423 23:15:28.559806 2777 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b52b24c1-b346-4df6-bb7c-94587faa87e8-whisker-ca-bundle\") pod \"whisker-746c567d77-h2ffx\" (UID: \"b52b24c1-b346-4df6-bb7c-94587faa87e8\") " pod="calico-system/whisker-746c567d77-h2ffx" Apr 23 23:15:28.753153 containerd[1534]: time="2026-04-23T23:15:28.752598644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-746c567d77-h2ffx,Uid:b52b24c1-b346-4df6-bb7c-94587faa87e8,Namespace:calico-system,Attempt:0,}" Apr 23 23:15:28.789204 containerd[1534]: 
time="2026-04-23T23:15:28.789150970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:28.790169 containerd[1534]: time="2026-04-23T23:15:28.790130617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=12456618" Apr 23 23:15:28.791652 containerd[1534]: time="2026-04-23T23:15:28.791565866Z" level=info msg="ImageCreate event name:\"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:28.796846 containerd[1534]: time="2026-04-23T23:15:28.796568900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:28.797913 containerd[1534]: time="2026-04-23T23:15:28.797462346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"15032209\" in 1.790945791s" Apr 23 23:15:28.797913 containerd[1534]: time="2026-04-23T23:15:28.797504466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:a127885d176e495b4edc6e0c0309c6570e4d776444937bfdc565fac5a13d8b3f\"" Apr 23 23:15:28.801504 containerd[1534]: time="2026-04-23T23:15:28.801448173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 23 23:15:28.805689 containerd[1534]: time="2026-04-23T23:15:28.805648561Z" level=info msg="CreateContainer within sandbox 
\"4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 23 23:15:28.824214 containerd[1534]: time="2026-04-23T23:15:28.822983078Z" level=info msg="Container 1a357c721ea3b843e515657cb7254a836ed68c9b2c17d9fb00c4e63018338216: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:28.841399 containerd[1534]: time="2026-04-23T23:15:28.841298481Z" level=info msg="CreateContainer within sandbox \"4452b7c9f6edba26f32d50bf3d31d73a90c4c919c06555b5e18f51ab39b5490c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a357c721ea3b843e515657cb7254a836ed68c9b2c17d9fb00c4e63018338216\"" Apr 23 23:15:28.844182 containerd[1534]: time="2026-04-23T23:15:28.844130100Z" level=info msg="StartContainer for \"1a357c721ea3b843e515657cb7254a836ed68c9b2c17d9fb00c4e63018338216\"" Apr 23 23:15:28.849098 containerd[1534]: time="2026-04-23T23:15:28.848714411Z" level=info msg="connecting to shim 1a357c721ea3b843e515657cb7254a836ed68c9b2c17d9fb00c4e63018338216" address="unix:///run/containerd/s/217c8a1946844a8903a8c1933333210f115393ca4877ba876e0660d41b33f1e6" protocol=ttrpc version=3 Apr 23 23:15:28.881645 systemd[1]: Started cri-containerd-1a357c721ea3b843e515657cb7254a836ed68c9b2c17d9fb00c4e63018338216.scope - libcontainer container 1a357c721ea3b843e515657cb7254a836ed68c9b2c17d9fb00c4e63018338216. 
Apr 23 23:15:28.973448 containerd[1534]: time="2026-04-23T23:15:28.973319009Z" level=info msg="StartContainer for \"1a357c721ea3b843e515657cb7254a836ed68c9b2c17d9fb00c4e63018338216\" returns successfully" Apr 23 23:15:28.985505 systemd-networkd[1425]: cali6346354b4a1: Link UP Apr 23 23:15:28.988540 systemd-networkd[1425]: cali6346354b4a1: Gained carrier Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.827 [INFO][5169] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0 whisker-746c567d77- calico-system b52b24c1-b346-4df6-bb7c-94587faa87e8 1030 0 2026-04-23 23:15:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:746c567d77 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-n-a35467bd0b whisker-746c567d77-h2ffx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6346354b4a1 [] [] }} ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.828 [INFO][5169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.899 [INFO][5182] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" HandleID="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" 
Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.912 [INFO][5182] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" HandleID="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cf510), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-n-a35467bd0b", "pod":"whisker-746c567d77-h2ffx", "timestamp":"2026-04-23 23:15:28.899675314 +0000 UTC"}, Hostname:"ci-4459-2-4-n-a35467bd0b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010e000)} Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.912 [INFO][5182] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.912 [INFO][5182] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.912 [INFO][5182] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-n-a35467bd0b' Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.917 [INFO][5182] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.927 [INFO][5182] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.941 [INFO][5182] ipam/ipam.go 526: Trying affinity for 192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.948 [INFO][5182] ipam/ipam.go 160: Attempting to load block cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.953 [INFO][5182] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.114.0/26 host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.953 [INFO][5182] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.114.0/26 handle="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.957 [INFO][5182] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439 Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.965 [INFO][5182] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.114.0/26 handle="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.978 [INFO][5182] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.114.9/26] block=192.168.114.0/26 handle="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.979 [INFO][5182] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.114.9/26] handle="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" host="ci-4459-2-4-n-a35467bd0b" Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.979 [INFO][5182] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:29.009629 containerd[1534]: 2026-04-23 23:15:28.979 [INFO][5182] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.114.9/26] IPv6=[] ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" HandleID="k8s-pod-network.16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" Apr 23 23:15:29.010838 containerd[1534]: 2026-04-23 23:15:28.981 [INFO][5169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0", GenerateName:"whisker-746c567d77-", Namespace:"calico-system", SelfLink:"", UID:"b52b24c1-b346-4df6-bb7c-94587faa87e8", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"746c567d77", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"", Pod:"whisker-746c567d77-h2ffx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6346354b4a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:29.010838 containerd[1534]: 2026-04-23 23:15:28.982 [INFO][5169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.114.9/32] ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" Apr 23 23:15:29.010838 containerd[1534]: 2026-04-23 23:15:28.982 [INFO][5169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6346354b4a1 ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" Apr 23 23:15:29.010838 containerd[1534]: 2026-04-23 23:15:28.985 [INFO][5169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" Apr 23 23:15:29.010838 containerd[1534]: 2026-04-23 23:15:28.989 [INFO][5169] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0", GenerateName:"whisker-746c567d77-", Namespace:"calico-system", SelfLink:"", UID:"b52b24c1-b346-4df6-bb7c-94587faa87e8", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 23, 23, 15, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"746c567d77", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-n-a35467bd0b", ContainerID:"16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439", Pod:"whisker-746c567d77-h2ffx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.114.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6346354b4a1", MAC:"f2:c5:b8:11:33:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 23 23:15:29.010838 containerd[1534]: 2026-04-23 23:15:29.006 [INFO][5169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" Namespace="calico-system" Pod="whisker-746c567d77-h2ffx" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--746c567d77--h2ffx-eth0" Apr 23 23:15:29.043926 containerd[1534]: time="2026-04-23T23:15:29.043815323Z" level=info msg="connecting to shim 16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439" address="unix:///run/containerd/s/a3240d79fcd85e3a2cb707fd8b8bacb6634222b3efc6037d270ec03f8dc56dba" namespace=k8s.io protocol=ttrpc version=3 Apr 23 23:15:29.047304 kubelet[2777]: I0423 23:15:29.047266 2777 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 23 23:15:29.049044 kubelet[2777]: I0423 23:15:29.047797 2777 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 23 23:15:29.084702 systemd[1]: Started cri-containerd-16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439.scope - libcontainer container 16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439. 
Apr 23 23:15:29.136317 containerd[1534]: time="2026-04-23T23:15:29.136248463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-746c567d77-h2ffx,Uid:b52b24c1-b346-4df6-bb7c-94587faa87e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439\"" Apr 23 23:15:29.143870 containerd[1534]: time="2026-04-23T23:15:29.143182749Z" level=info msg="CreateContainer within sandbox \"16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 23 23:15:29.154210 containerd[1534]: time="2026-04-23T23:15:29.154147663Z" level=info msg="Container 6e1b67f3729eac2d6c945d12aeb73f361c259b02c802b3355acfe8559601c2ba: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:29.165337 containerd[1534]: time="2026-04-23T23:15:29.165236017Z" level=info msg="CreateContainer within sandbox \"16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6e1b67f3729eac2d6c945d12aeb73f361c259b02c802b3355acfe8559601c2ba\"" Apr 23 23:15:29.166482 containerd[1534]: time="2026-04-23T23:15:29.166420185Z" level=info msg="StartContainer for \"6e1b67f3729eac2d6c945d12aeb73f361c259b02c802b3355acfe8559601c2ba\"" Apr 23 23:15:29.169752 containerd[1534]: time="2026-04-23T23:15:29.169695087Z" level=info msg="connecting to shim 6e1b67f3729eac2d6c945d12aeb73f361c259b02c802b3355acfe8559601c2ba" address="unix:///run/containerd/s/a3240d79fcd85e3a2cb707fd8b8bacb6634222b3efc6037d270ec03f8dc56dba" protocol=ttrpc version=3 Apr 23 23:15:29.191276 systemd[1]: Started cri-containerd-6e1b67f3729eac2d6c945d12aeb73f361c259b02c802b3355acfe8559601c2ba.scope - libcontainer container 6e1b67f3729eac2d6c945d12aeb73f361c259b02c802b3355acfe8559601c2ba. 
Apr 23 23:15:29.241722 containerd[1534]: time="2026-04-23T23:15:29.241681570Z" level=info msg="StartContainer for \"6e1b67f3729eac2d6c945d12aeb73f361c259b02c802b3355acfe8559601c2ba\" returns successfully" Apr 23 23:15:29.253409 containerd[1534]: time="2026-04-23T23:15:29.253306768Z" level=info msg="CreateContainer within sandbox \"16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 23 23:15:29.266045 containerd[1534]: time="2026-04-23T23:15:29.265899052Z" level=info msg="Container f5d0c5f790be5eaead92ac1dd8bdc726f94041780ebb1054645271ef5ea11abb: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:29.292145 containerd[1534]: time="2026-04-23T23:15:29.292047828Z" level=info msg="CreateContainer within sandbox \"16ec092293b082eef03c068ec4bd6efecd63c46c0144890a9033d1befd01f439\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f5d0c5f790be5eaead92ac1dd8bdc726f94041780ebb1054645271ef5ea11abb\"" Apr 23 23:15:29.294581 containerd[1534]: time="2026-04-23T23:15:29.293421877Z" level=info msg="StartContainer for \"f5d0c5f790be5eaead92ac1dd8bdc726f94041780ebb1054645271ef5ea11abb\"" Apr 23 23:15:29.297292 containerd[1534]: time="2026-04-23T23:15:29.297188862Z" level=info msg="connecting to shim f5d0c5f790be5eaead92ac1dd8bdc726f94041780ebb1054645271ef5ea11abb" address="unix:///run/containerd/s/a3240d79fcd85e3a2cb707fd8b8bacb6634222b3efc6037d270ec03f8dc56dba" protocol=ttrpc version=3 Apr 23 23:15:29.335539 systemd[1]: Started cri-containerd-f5d0c5f790be5eaead92ac1dd8bdc726f94041780ebb1054645271ef5ea11abb.scope - libcontainer container f5d0c5f790be5eaead92ac1dd8bdc726f94041780ebb1054645271ef5ea11abb. 
Apr 23 23:15:29.339243 kubelet[2777]: I0423 23:15:29.338980 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w65pl" podStartSLOduration=19.642710342 podStartE2EDuration="34.338956982s" podCreationTimestamp="2026-04-23 23:14:55 +0000 UTC" firstStartedPulling="2026-04-23 23:15:14.102276513 +0000 UTC m=+38.311504535" lastFinishedPulling="2026-04-23 23:15:28.798523153 +0000 UTC m=+53.007751175" observedRunningTime="2026-04-23 23:15:29.337607573 +0000 UTC m=+53.546835595" watchObservedRunningTime="2026-04-23 23:15:29.338956982 +0000 UTC m=+53.548185044" Apr 23 23:15:29.413062 containerd[1534]: time="2026-04-23T23:15:29.412714477Z" level=info msg="StartContainer for \"f5d0c5f790be5eaead92ac1dd8bdc726f94041780ebb1054645271ef5ea11abb\" returns successfully" Apr 23 23:15:29.942055 kubelet[2777]: I0423 23:15:29.941885 2777 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1b9eb6-a9bc-42a8-8140-b36f5efd125a" path="/var/lib/kubelet/pods/dd1b9eb6-a9bc-42a8-8140-b36f5efd125a/volumes" Apr 23 23:15:30.345552 kubelet[2777]: I0423 23:15:30.345344 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-746c567d77-h2ffx" podStartSLOduration=2.345326766 podStartE2EDuration="2.345326766s" podCreationTimestamp="2026-04-23 23:15:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-23 23:15:30.345260085 +0000 UTC m=+54.554488067" watchObservedRunningTime="2026-04-23 23:15:30.345326766 +0000 UTC m=+54.554554788" Apr 23 23:15:30.770218 systemd-networkd[1425]: cali6346354b4a1: Gained IPv6LL Apr 23 23:15:32.108803 containerd[1534]: time="2026-04-23T23:15:32.108701854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:32.111029 containerd[1534]: 
time="2026-04-23T23:15:32.110937029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=46169343" Apr 23 23:15:32.113299 containerd[1534]: time="2026-04-23T23:15:32.113227084Z" level=info msg="ImageCreate event name:\"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:32.117211 containerd[1534]: time="2026-04-23T23:15:32.117096670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 23 23:15:32.118536 containerd[1534]: time="2026-04-23T23:15:32.118459439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"48744950\" in 3.316961426s" Apr 23 23:15:32.118536 containerd[1534]: time="2026-04-23T23:15:32.118502399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:f3ba40f705afacb15a8a2f5b02c08a912321f045220eb8f8f1f5ca51f129741a\"" Apr 23 23:15:32.137882 containerd[1534]: time="2026-04-23T23:15:32.137844128Z" level=info msg="CreateContainer within sandbox \"1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 23 23:15:32.145540 containerd[1534]: time="2026-04-23T23:15:32.145494339Z" level=info msg="Container e80051a4444a7ba21eba5e32b27b0e2f82ce2461c2714e99627a46226ff88b29: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:15:32.154559 containerd[1534]: 
time="2026-04-23T23:15:32.154484719Z" level=info msg="CreateContainer within sandbox \"1e011bfe5cc4afc2ad9211010aec1cbb3d0c2d31aeeedfb3817854859e10bd2b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e80051a4444a7ba21eba5e32b27b0e2f82ce2461c2714e99627a46226ff88b29\"" Apr 23 23:15:32.155723 containerd[1534]: time="2026-04-23T23:15:32.155482165Z" level=info msg="StartContainer for \"e80051a4444a7ba21eba5e32b27b0e2f82ce2461c2714e99627a46226ff88b29\"" Apr 23 23:15:32.158407 containerd[1534]: time="2026-04-23T23:15:32.158284904Z" level=info msg="connecting to shim e80051a4444a7ba21eba5e32b27b0e2f82ce2461c2714e99627a46226ff88b29" address="unix:///run/containerd/s/dc915fad1941bc6e9afa995b6153ed27cdd641f04b89fcce3177710c8f1ed03a" protocol=ttrpc version=3 Apr 23 23:15:32.188279 systemd[1]: Started cri-containerd-e80051a4444a7ba21eba5e32b27b0e2f82ce2461c2714e99627a46226ff88b29.scope - libcontainer container e80051a4444a7ba21eba5e32b27b0e2f82ce2461c2714e99627a46226ff88b29. 
Apr 23 23:15:32.270057 containerd[1534]: time="2026-04-23T23:15:32.270009367Z" level=info msg="StartContainer for \"e80051a4444a7ba21eba5e32b27b0e2f82ce2461c2714e99627a46226ff88b29\" returns successfully" Apr 23 23:15:32.625927 kubelet[2777]: I0423 23:15:32.625470 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:15:32.666099 kubelet[2777]: I0423 23:15:32.665675 2777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5666744c8-g96t2" podStartSLOduration=32.972366099 podStartE2EDuration="37.665654918s" podCreationTimestamp="2026-04-23 23:14:55 +0000 UTC" firstStartedPulling="2026-04-23 23:15:27.42665327 +0000 UTC m=+51.635881292" lastFinishedPulling="2026-04-23 23:15:32.119942009 +0000 UTC m=+56.329170111" observedRunningTime="2026-04-23 23:15:32.365693243 +0000 UTC m=+56.574921385" watchObservedRunningTime="2026-04-23 23:15:32.665654918 +0000 UTC m=+56.874882940" Apr 23 23:15:35.943016 containerd[1534]: time="2026-04-23T23:15:35.942977764Z" level=info msg="StopPodSandbox for \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\"" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:35.990 [WARNING][5447] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:35.990 [INFO][5447] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:35.990 [INFO][5447] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" iface="eth0" netns="" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:35.990 [INFO][5447] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:35.990 [INFO][5447] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:36.021 [INFO][5455] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:36.021 [INFO][5455] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:36.021 [INFO][5455] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:36.033 [WARNING][5455] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:36.033 [INFO][5455] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:36.036 [INFO][5455] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:36.045209 containerd[1534]: 2026-04-23 23:15:36.041 [INFO][5447] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.045209 containerd[1534]: time="2026-04-23T23:15:36.045159237Z" level=info msg="TearDown network for sandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" successfully" Apr 23 23:15:36.045209 containerd[1534]: time="2026-04-23T23:15:36.045179797Z" level=info msg="StopPodSandbox for \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" returns successfully" Apr 23 23:15:36.046319 containerd[1534]: time="2026-04-23T23:15:36.046276565Z" level=info msg="RemovePodSandbox for \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\"" Apr 23 23:15:36.046682 containerd[1534]: time="2026-04-23T23:15:36.046471286Z" level=info msg="Forcibly stopping sandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\"" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.099 [WARNING][5469] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" WorkloadEndpoint="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.100 [INFO][5469] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.100 [INFO][5469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" iface="eth0" netns="" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.100 [INFO][5469] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.100 [INFO][5469] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.121 [INFO][5476] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.121 [INFO][5476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.121 [INFO][5476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.135 [WARNING][5476] ipam/ipam_plugin.go 515: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.135 [INFO][5476] ipam/ipam_plugin.go 526: Releasing address using workloadID ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" HandleID="k8s-pod-network.7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Workload="ci--4459--2--4--n--a35467bd0b-k8s-whisker--554fbbd85--5jspr-eth0" Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.138 [INFO][5476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 23 23:15:36.141710 containerd[1534]: 2026-04-23 23:15:36.139 [INFO][5469] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820" Apr 23 23:15:36.142525 containerd[1534]: time="2026-04-23T23:15:36.142002395Z" level=info msg="TearDown network for sandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" successfully" Apr 23 23:15:36.145672 containerd[1534]: time="2026-04-23T23:15:36.145359857Z" level=info msg="Ensure that sandbox 7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820 in task-service has been cleanup successfully" Apr 23 23:15:36.150907 containerd[1534]: time="2026-04-23T23:15:36.150859853Z" level=info msg="RemovePodSandbox \"7782de0bb4a070a01db376c35f25b5ae17a78c024bbda22672b8031038142820\" returns successfully" Apr 23 23:15:36.430658 kubelet[2777]: I0423 23:15:36.430539 2777 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 23 23:17:00.075359 systemd[1]: Started sshd@7-49.13.208.85:22-50.85.169.122:53778.service - OpenSSH per-connection server daemon (50.85.169.122:53778). 
Apr 23 23:17:00.221792 sshd[5779]: Accepted publickey for core from 50.85.169.122 port 53778 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:00.225577 sshd-session[5779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:00.235720 systemd-logind[1504]: New session 8 of user core. Apr 23 23:17:00.241453 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 23 23:17:00.381723 sshd[5782]: Connection closed by 50.85.169.122 port 53778 Apr 23 23:17:00.382577 sshd-session[5779]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:00.390242 systemd[1]: sshd@7-49.13.208.85:22-50.85.169.122:53778.service: Deactivated successfully. Apr 23 23:17:00.394588 systemd[1]: session-8.scope: Deactivated successfully. Apr 23 23:17:00.396610 systemd-logind[1504]: Session 8 logged out. Waiting for processes to exit. Apr 23 23:17:00.398804 systemd-logind[1504]: Removed session 8. Apr 23 23:17:05.415422 systemd[1]: Started sshd@8-49.13.208.85:22-50.85.169.122:53790.service - OpenSSH per-connection server daemon (50.85.169.122:53790). Apr 23 23:17:05.557913 sshd[5818]: Accepted publickey for core from 50.85.169.122 port 53790 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:05.560539 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:05.566393 systemd-logind[1504]: New session 9 of user core. Apr 23 23:17:05.573529 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 23 23:17:05.700686 sshd[5821]: Connection closed by 50.85.169.122 port 53790 Apr 23 23:17:05.701548 sshd-session[5818]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:05.707933 systemd[1]: sshd@8-49.13.208.85:22-50.85.169.122:53790.service: Deactivated successfully. Apr 23 23:17:05.717113 systemd[1]: session-9.scope: Deactivated successfully. Apr 23 23:17:05.722607 systemd-logind[1504]: Session 9 logged out. 
Waiting for processes to exit. Apr 23 23:17:05.725976 systemd-logind[1504]: Removed session 9. Apr 23 23:17:10.734333 systemd[1]: Started sshd@9-49.13.208.85:22-50.85.169.122:39074.service - OpenSSH per-connection server daemon (50.85.169.122:39074). Apr 23 23:17:10.876564 sshd[5834]: Accepted publickey for core from 50.85.169.122 port 39074 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:10.878803 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:10.886549 systemd-logind[1504]: New session 10 of user core. Apr 23 23:17:10.893489 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 23 23:17:11.015189 sshd[5837]: Connection closed by 50.85.169.122 port 39074 Apr 23 23:17:11.016154 sshd-session[5834]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:11.022093 systemd-logind[1504]: Session 10 logged out. Waiting for processes to exit. Apr 23 23:17:11.022447 systemd[1]: sshd@9-49.13.208.85:22-50.85.169.122:39074.service: Deactivated successfully. Apr 23 23:17:11.025292 systemd[1]: session-10.scope: Deactivated successfully. Apr 23 23:17:11.028082 systemd-logind[1504]: Removed session 10. Apr 23 23:17:16.048685 systemd[1]: Started sshd@10-49.13.208.85:22-50.85.169.122:39084.service - OpenSSH per-connection server daemon (50.85.169.122:39084). Apr 23 23:17:16.190074 sshd[5881]: Accepted publickey for core from 50.85.169.122 port 39084 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:16.192137 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:16.199365 systemd-logind[1504]: New session 11 of user core. Apr 23 23:17:16.207462 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 23 23:17:16.345236 sshd[5903]: Connection closed by 50.85.169.122 port 39084 Apr 23 23:17:16.346710 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:16.352788 systemd[1]: sshd@10-49.13.208.85:22-50.85.169.122:39084.service: Deactivated successfully. Apr 23 23:17:16.357274 systemd[1]: session-11.scope: Deactivated successfully. Apr 23 23:17:16.360326 systemd-logind[1504]: Session 11 logged out. Waiting for processes to exit. Apr 23 23:17:16.375585 systemd-logind[1504]: Removed session 11. Apr 23 23:17:16.376247 systemd[1]: Started sshd@11-49.13.208.85:22-50.85.169.122:39098.service - OpenSSH per-connection server daemon (50.85.169.122:39098). Apr 23 23:17:16.510640 sshd[5920]: Accepted publickey for core from 50.85.169.122 port 39098 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:16.512860 sshd-session[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:16.519014 systemd-logind[1504]: New session 12 of user core. Apr 23 23:17:16.523250 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 23 23:17:16.700331 sshd[5923]: Connection closed by 50.85.169.122 port 39098 Apr 23 23:17:16.699750 sshd-session[5920]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:16.707221 systemd[1]: sshd@11-49.13.208.85:22-50.85.169.122:39098.service: Deactivated successfully. Apr 23 23:17:16.713162 systemd[1]: session-12.scope: Deactivated successfully. Apr 23 23:17:16.714570 systemd-logind[1504]: Session 12 logged out. Waiting for processes to exit. Apr 23 23:17:16.732183 systemd[1]: Started sshd@12-49.13.208.85:22-50.85.169.122:39102.service - OpenSSH per-connection server daemon (50.85.169.122:39102). Apr 23 23:17:16.734230 systemd-logind[1504]: Removed session 12. 
Apr 23 23:17:16.871256 sshd[5932]: Accepted publickey for core from 50.85.169.122 port 39102 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:16.874730 sshd-session[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:16.882981 systemd-logind[1504]: New session 13 of user core. Apr 23 23:17:16.893276 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 23 23:17:17.025674 sshd[5935]: Connection closed by 50.85.169.122 port 39102 Apr 23 23:17:17.026579 sshd-session[5932]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:17.032773 systemd-logind[1504]: Session 13 logged out. Waiting for processes to exit. Apr 23 23:17:17.032902 systemd[1]: sshd@12-49.13.208.85:22-50.85.169.122:39102.service: Deactivated successfully. Apr 23 23:17:17.037164 systemd[1]: session-13.scope: Deactivated successfully. Apr 23 23:17:17.040783 systemd-logind[1504]: Removed session 13. Apr 23 23:17:22.055861 systemd[1]: Started sshd@13-49.13.208.85:22-50.85.169.122:57006.service - OpenSSH per-connection server daemon (50.85.169.122:57006). Apr 23 23:17:22.191445 sshd[5947]: Accepted publickey for core from 50.85.169.122 port 57006 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:22.194491 sshd-session[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:22.200119 systemd-logind[1504]: New session 14 of user core. Apr 23 23:17:22.208408 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 23 23:17:22.322698 sshd[5950]: Connection closed by 50.85.169.122 port 57006 Apr 23 23:17:22.323957 sshd-session[5947]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:22.329102 systemd[1]: sshd@13-49.13.208.85:22-50.85.169.122:57006.service: Deactivated successfully. Apr 23 23:17:22.333224 systemd[1]: session-14.scope: Deactivated successfully. Apr 23 23:17:22.336054 systemd-logind[1504]: Session 14 logged out. 
Waiting for processes to exit. Apr 23 23:17:22.337398 systemd-logind[1504]: Removed session 14. Apr 23 23:17:22.360463 systemd[1]: Started sshd@14-49.13.208.85:22-50.85.169.122:57020.service - OpenSSH per-connection server daemon (50.85.169.122:57020). Apr 23 23:17:22.495079 sshd[5961]: Accepted publickey for core from 50.85.169.122 port 57020 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:22.496660 sshd-session[5961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:22.503284 systemd-logind[1504]: New session 15 of user core. Apr 23 23:17:22.511311 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 23 23:17:22.811739 sshd[5964]: Connection closed by 50.85.169.122 port 57020 Apr 23 23:17:22.814647 sshd-session[5961]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:22.820674 systemd[1]: sshd@14-49.13.208.85:22-50.85.169.122:57020.service: Deactivated successfully. Apr 23 23:17:22.824125 systemd[1]: session-15.scope: Deactivated successfully. Apr 23 23:17:22.827131 systemd-logind[1504]: Session 15 logged out. Waiting for processes to exit. Apr 23 23:17:22.840337 systemd[1]: Started sshd@15-49.13.208.85:22-50.85.169.122:57030.service - OpenSSH per-connection server daemon (50.85.169.122:57030). Apr 23 23:17:22.841271 systemd-logind[1504]: Removed session 15. Apr 23 23:17:22.978955 sshd[5974]: Accepted publickey for core from 50.85.169.122 port 57030 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:22.982631 sshd-session[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:22.990653 systemd-logind[1504]: New session 16 of user core. Apr 23 23:17:22.997460 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 23 23:17:23.797207 sshd[5977]: Connection closed by 50.85.169.122 port 57030 Apr 23 23:17:23.796363 sshd-session[5974]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:23.804744 systemd[1]: sshd@15-49.13.208.85:22-50.85.169.122:57030.service: Deactivated successfully. Apr 23 23:17:23.807910 systemd[1]: session-16.scope: Deactivated successfully. Apr 23 23:17:23.812592 systemd-logind[1504]: Session 16 logged out. Waiting for processes to exit. Apr 23 23:17:23.830497 systemd[1]: Started sshd@16-49.13.208.85:22-50.85.169.122:57046.service - OpenSSH per-connection server daemon (50.85.169.122:57046). Apr 23 23:17:23.833694 systemd-logind[1504]: Removed session 16. Apr 23 23:17:23.974233 sshd[5995]: Accepted publickey for core from 50.85.169.122 port 57046 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:23.976997 sshd-session[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:23.983559 systemd-logind[1504]: New session 17 of user core. Apr 23 23:17:23.991358 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 23 23:17:24.248122 sshd[5999]: Connection closed by 50.85.169.122 port 57046 Apr 23 23:17:24.250660 sshd-session[5995]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:24.258293 systemd[1]: sshd@16-49.13.208.85:22-50.85.169.122:57046.service: Deactivated successfully. Apr 23 23:17:24.262452 systemd[1]: session-17.scope: Deactivated successfully. Apr 23 23:17:24.266569 systemd-logind[1504]: Session 17 logged out. Waiting for processes to exit. Apr 23 23:17:24.279885 systemd[1]: Started sshd@17-49.13.208.85:22-50.85.169.122:57048.service - OpenSSH per-connection server daemon (50.85.169.122:57048). Apr 23 23:17:24.282713 systemd-logind[1504]: Removed session 17. 
Apr 23 23:17:24.411855 sshd[6009]: Accepted publickey for core from 50.85.169.122 port 57048 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:24.414301 sshd-session[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:24.419559 systemd-logind[1504]: New session 18 of user core. Apr 23 23:17:24.423255 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 23 23:17:24.540077 sshd[6012]: Connection closed by 50.85.169.122 port 57048 Apr 23 23:17:24.540397 sshd-session[6009]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:24.548306 systemd-logind[1504]: Session 18 logged out. Waiting for processes to exit. Apr 23 23:17:24.549098 systemd[1]: sshd@17-49.13.208.85:22-50.85.169.122:57048.service: Deactivated successfully. Apr 23 23:17:24.551702 systemd[1]: session-18.scope: Deactivated successfully. Apr 23 23:17:24.554713 systemd-logind[1504]: Removed session 18. Apr 23 23:17:29.571270 systemd[1]: Started sshd@18-49.13.208.85:22-50.85.169.122:58984.service - OpenSSH per-connection server daemon (50.85.169.122:58984). Apr 23 23:17:29.708616 sshd[6071]: Accepted publickey for core from 50.85.169.122 port 58984 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:29.713421 sshd-session[6071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:29.718423 systemd-logind[1504]: New session 19 of user core. Apr 23 23:17:29.726369 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 23 23:17:29.854722 sshd[6074]: Connection closed by 50.85.169.122 port 58984 Apr 23 23:17:29.855589 sshd-session[6071]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:29.862144 systemd[1]: sshd@18-49.13.208.85:22-50.85.169.122:58984.service: Deactivated successfully. Apr 23 23:17:29.866274 systemd[1]: session-19.scope: Deactivated successfully. Apr 23 23:17:29.869571 systemd-logind[1504]: Session 19 logged out. 
Waiting for processes to exit. Apr 23 23:17:29.871804 systemd-logind[1504]: Removed session 19. Apr 23 23:17:34.884949 systemd[1]: Started sshd@19-49.13.208.85:22-50.85.169.122:58990.service - OpenSSH per-connection server daemon (50.85.169.122:58990). Apr 23 23:17:35.020557 sshd[6110]: Accepted publickey for core from 50.85.169.122 port 58990 ssh2: RSA SHA256:Tz0dqMPsdf8xUb4jUaTJqqr7RT+Ihh1eVJlUIJQ/qIM Apr 23 23:17:35.022322 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 23 23:17:35.028830 systemd-logind[1504]: New session 20 of user core. Apr 23 23:17:35.033310 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 23 23:17:35.162314 sshd[6113]: Connection closed by 50.85.169.122 port 58990 Apr 23 23:17:35.163641 sshd-session[6110]: pam_unix(sshd:session): session closed for user core Apr 23 23:17:35.169011 systemd-logind[1504]: Session 20 logged out. Waiting for processes to exit. Apr 23 23:17:35.169230 systemd[1]: sshd@19-49.13.208.85:22-50.85.169.122:58990.service: Deactivated successfully. Apr 23 23:17:35.174357 systemd[1]: session-20.scope: Deactivated successfully. Apr 23 23:17:35.180273 systemd-logind[1504]: Removed session 20. Apr 23 23:17:49.659559 kubelet[2777]: E0423 23:17:49.659482 2777 controller.go:195] "Failed to update lease" err="Put \"https://49.13.208.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-n-a35467bd0b?timeout=10s\": context deadline exceeded" Apr 23 23:17:49.733875 systemd[1]: cri-containerd-5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e.scope: Deactivated successfully. Apr 23 23:17:49.734626 systemd[1]: cri-containerd-5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e.scope: Consumed 3.624s CPU time, 63.6M memory peak, 2.7M read from disk. 
Apr 23 23:17:49.737204 containerd[1534]: time="2026-04-23T23:17:49.736305468Z" level=info msg="received container exit event container_id:\"5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e\" id:\"5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e\" pid:2618 exit_status:1 exited_at:{seconds:1776986269 nanos:734685299}" Apr 23 23:17:49.769215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e-rootfs.mount: Deactivated successfully. Apr 23 23:17:49.845884 kubelet[2777]: E0423 23:17:49.845815 2777 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51764->10.0.0.2:2379: read: connection timed out" Apr 23 23:17:49.851971 systemd[1]: cri-containerd-edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd.scope: Deactivated successfully. Apr 23 23:17:49.852294 systemd[1]: cri-containerd-edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd.scope: Consumed 3.142s CPU time, 27.2M memory peak, 4.1M read from disk. Apr 23 23:17:49.858303 containerd[1534]: time="2026-04-23T23:17:49.858115008Z" level=info msg="received container exit event container_id:\"edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd\" id:\"edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd\" pid:2631 exit_status:1 exited_at:{seconds:1776986269 nanos:855823074}" Apr 23 23:17:49.895979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd-rootfs.mount: Deactivated successfully. Apr 23 23:17:50.375244 systemd[1]: cri-containerd-8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b.scope: Deactivated successfully. 
Apr 23 23:17:50.376163 systemd[1]: cri-containerd-8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b.scope: Consumed 13.757s CPU time, 123M memory peak, 3.7M read from disk. Apr 23 23:17:50.379345 containerd[1534]: time="2026-04-23T23:17:50.379203972Z" level=info msg="received container exit event container_id:\"8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b\" id:\"8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b\" pid:3097 exit_status:1 exited_at:{seconds:1776986270 nanos:377632682}" Apr 23 23:17:50.405562 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b-rootfs.mount: Deactivated successfully. Apr 23 23:17:50.787303 kubelet[2777]: I0423 23:17:50.787235 2777 scope.go:117] "RemoveContainer" containerID="edd8b2c8fb3818d57187e5c4a24e7c1e886cfeec77dd8f173150b79baf5db5cd" Apr 23 23:17:50.793415 kubelet[2777]: I0423 23:17:50.793381 2777 scope.go:117] "RemoveContainer" containerID="5bd79c85bd41bb63bf2eccca67cdc61cc1044a8661b24099db1cc206ef0b6e5e" Apr 23 23:17:50.801392 kubelet[2777]: I0423 23:17:50.801283 2777 scope.go:117] "RemoveContainer" containerID="8357d75073bda83577d23e12059aa9b2e6ade75413381d83abc04d272ba53a5b" Apr 23 23:17:50.801803 containerd[1534]: time="2026-04-23T23:17:50.801757737Z" level=info msg="CreateContainer within sandbox \"2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 23 23:17:50.803530 containerd[1534]: time="2026-04-23T23:17:50.802852864Z" level=info msg="CreateContainer within sandbox \"efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 23 23:17:50.805741 containerd[1534]: time="2026-04-23T23:17:50.805710641Z" level=info msg="CreateContainer within sandbox \"29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 23 23:17:50.820080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733497798.mount: Deactivated successfully. Apr 23 23:17:50.826057 containerd[1534]: time="2026-04-23T23:17:50.824299554Z" level=info msg="Container e9536a3b18c478bc7009ea47f6d65d07928d8f0d1914cab62209ada7c1b94d35: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:50.831399 containerd[1534]: time="2026-04-23T23:17:50.831292356Z" level=info msg="Container 33c057edac33ae44ab40a1d369d50ea1c321329c058e9981df43843ab0ce6009: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:50.832692 containerd[1534]: time="2026-04-23T23:17:50.832665405Z" level=info msg="Container 716b62d75494877231ec2be1a2baee56cabfd0e99e550e48d187d4dd66f0e1d5: CDI devices from CRI Config.CDIDevices: []" Apr 23 23:17:50.846471 containerd[1534]: time="2026-04-23T23:17:50.846412648Z" level=info msg="CreateContainer within sandbox \"2dc531ed6840fc7b0fcc8eda9f7b44cae50df7dbdfd96773cde58d5a8bd89d88\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"33c057edac33ae44ab40a1d369d50ea1c321329c058e9981df43843ab0ce6009\"" Apr 23 23:17:50.846715 containerd[1534]: time="2026-04-23T23:17:50.846553529Z" level=info msg="CreateContainer within sandbox \"efd7099e803c36b528ad27448605145fc7c9657cdc79255afb7457c37d06c5f0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e9536a3b18c478bc7009ea47f6d65d07928d8f0d1914cab62209ada7c1b94d35\"" Apr 23 23:17:50.847755 containerd[1534]: time="2026-04-23T23:17:50.847695896Z" level=info msg="StartContainer for \"33c057edac33ae44ab40a1d369d50ea1c321329c058e9981df43843ab0ce6009\"" Apr 23 23:17:50.848244 containerd[1534]: time="2026-04-23T23:17:50.848177899Z" level=info msg="StartContainer for \"e9536a3b18c478bc7009ea47f6d65d07928d8f0d1914cab62209ada7c1b94d35\"" Apr 23 23:17:50.849447 containerd[1534]: time="2026-04-23T23:17:50.849396346Z" level=info msg="connecting to shim 
e9536a3b18c478bc7009ea47f6d65d07928d8f0d1914cab62209ada7c1b94d35" address="unix:///run/containerd/s/cc83f0076ca31c456e013418d1de8266642a63d941fa19ac965fe59de2fc1ceb" protocol=ttrpc version=3 Apr 23 23:17:50.850005 containerd[1534]: time="2026-04-23T23:17:50.849852349Z" level=info msg="CreateContainer within sandbox \"29871d3addcdc676f15bf253045f4c73f737355e556cac542a8342a014101ae9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"716b62d75494877231ec2be1a2baee56cabfd0e99e550e48d187d4dd66f0e1d5\"" Apr 23 23:17:50.851712 containerd[1534]: time="2026-04-23T23:17:50.851683720Z" level=info msg="StartContainer for \"716b62d75494877231ec2be1a2baee56cabfd0e99e550e48d187d4dd66f0e1d5\"" Apr 23 23:17:50.853160 containerd[1534]: time="2026-04-23T23:17:50.853098089Z" level=info msg="connecting to shim 716b62d75494877231ec2be1a2baee56cabfd0e99e550e48d187d4dd66f0e1d5" address="unix:///run/containerd/s/701e9a73a6dda6aa5ea03d8ebd50c879cc3d292584b30da2b3b6a11ae33e82be" protocol=ttrpc version=3 Apr 23 23:17:50.853408 containerd[1534]: time="2026-04-23T23:17:50.853097889Z" level=info msg="connecting to shim 33c057edac33ae44ab40a1d369d50ea1c321329c058e9981df43843ab0ce6009" address="unix:///run/containerd/s/92f7dd97be91b31ad6df31f221ffefabeb3666adf2135ebcd4b3f7f861f3d4f4" protocol=ttrpc version=3 Apr 23 23:17:50.889274 systemd[1]: Started cri-containerd-33c057edac33ae44ab40a1d369d50ea1c321329c058e9981df43843ab0ce6009.scope - libcontainer container 33c057edac33ae44ab40a1d369d50ea1c321329c058e9981df43843ab0ce6009. Apr 23 23:17:50.890645 systemd[1]: Started cri-containerd-716b62d75494877231ec2be1a2baee56cabfd0e99e550e48d187d4dd66f0e1d5.scope - libcontainer container 716b62d75494877231ec2be1a2baee56cabfd0e99e550e48d187d4dd66f0e1d5. Apr 23 23:17:50.894514 systemd[1]: Started cri-containerd-e9536a3b18c478bc7009ea47f6d65d07928d8f0d1914cab62209ada7c1b94d35.scope - libcontainer container e9536a3b18c478bc7009ea47f6d65d07928d8f0d1914cab62209ada7c1b94d35. 
Apr 23 23:17:50.977785 containerd[1534]: time="2026-04-23T23:17:50.976990921Z" level=info msg="StartContainer for \"e9536a3b18c478bc7009ea47f6d65d07928d8f0d1914cab62209ada7c1b94d35\" returns successfully" Apr 23 23:17:50.978971 containerd[1534]: time="2026-04-23T23:17:50.978845412Z" level=info msg="StartContainer for \"33c057edac33ae44ab40a1d369d50ea1c321329c058e9981df43843ab0ce6009\" returns successfully" Apr 23 23:17:50.987789 containerd[1534]: time="2026-04-23T23:17:50.987743666Z" level=info msg="StartContainer for \"716b62d75494877231ec2be1a2baee56cabfd0e99e550e48d187d4dd66f0e1d5\" returns successfully" Apr 23 23:17:51.821434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168981721.mount: Deactivated successfully. Apr 23 23:17:54.266819 kubelet[2777]: E0423 23:17:54.260565 2777 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51628->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-2-4-n-a35467bd0b.18a91f9791c994a5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-2-4-n-a35467bd0b,UID:17d4e089f1e05e40158787aaa9ab9bf6,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-n-a35467bd0b,},FirstTimestamp:2026-04-23 23:17:43.820801189 +0000 UTC m=+188.030029211,LastTimestamp:2026-04-23 23:17:43.820801189 +0000 UTC m=+188.030029211,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-n-a35467bd0b,}"